
Content

ADO.NET SQL Server
 In This Section
 See also
SQL Server Security
 In This Section
 Related Sections
 See also
Overview of SQL Server Security
 In This Section
 See also
Authentication in SQL Server
 Authentication Scenarios
 Login Types
 Mixed Mode Authentication
 External Resources
 See also
Server and Database Roles in SQL Server
 Fixed Server Roles
 Fixed Database Roles
 Database Roles and Users
  The public Role
  The dbo User Account
  The guest User Account
 See also
Ownership and User-Schema Separation in SQL Server
 User-Schema Separation
  Schema Owners and Permissions
  Built-In Schemas
   The dbo Schema
 External Resources
 See also
Authorization and Permissions in SQL Server
 The Principle of Least Privilege
 Role-Based Permissions
 Permissions Through Procedural Code
 Permission Statements
 Ownership Chains
 Procedural Code and Ownership Chaining
 External Resources
 See also
Data Encryption in SQL Server
 Keys and Algorithms
 External Resources
 See also
CLR Integration Security in SQL Server
 External Resources
 See also
Application Security Scenarios in SQL Server
 Common Threats
  SQL Injection
  Elevation of Privilege
  Probing and Intelligent Observation
  Authentication
  Passwords
 In This Section
 See also
Managing Permissions with Stored Procedures in SQL Server
 Stored Procedure Benefits
 Stored Procedure Execution
 Best Practices
 External Resources
 See also
Writing Secure Dynamic SQL in SQL Server
 Anatomy of a SQL Injection Attack
 Dynamic SQL Strategies
  EXECUTE AS
  Certificate Signing
  Cross Database Access
 External Resources
 See also
Signing Stored Procedures in SQL Server
 Creating Certificates
 External Resources
 See also
Customizing Permissions with Impersonation in SQL Server
 Context Switching with the EXECUTE AS Statement
 Granting Permissions with the EXECUTE AS Clause
  Using EXECUTE AS with REVERT
  Specifying the Execution Context
 See also
Granting Row-Level Permissions in SQL Server
 Implementing Row-level Filtering
 See also
Creating Application Roles in SQL Server
 Application Role Features
  The Principle of Least Privilege
  Application Role Enhancements
 Application Role Alternatives
 External Resources
 See also
Enabling Cross-Database Access in SQL Server
 Off By Default
 Enabling Cross-database Ownership Chaining
  Dynamic SQL
 External Resources
 See also
SQL Server Express Security
 Network Access
 User Instances
 External Resources
 See also
SQL Server Data Types and ADO.NET
 In This Section
 Reference
 See also
SqlTypes and the DataSet
 Example
 See also
Handling Null Values
 Nulls and Three-Valued Logic
 Nulls and SqlBoolean
  Understanding the ANSI_NULLS Option
 Assigning Null Values
  Multiple Column (Row) Assignment
 Assigning Null Values
  Example
 Comparing Null Values with SqlTypes and CLR Types
 See also
Comparing GUID and uniqueidentifier Values
 Working with SqlGuid Values
  Comparing GUID Values
 See also
Date and Time Data
 Date/Time Data Types Introduced in SQL Server 2008
 Date Format and Date Order
 Date/Time Data Types and Parameters
  SqlParameter Properties
  Creating Parameters
  Date Example
  Time Example
  Datetime2 Example
  DateTimeOffSet Example
  AddWithValue
 Retrieving Date and Time Data
 Specifying Date and Time Values as Literals
 Resources in SQL Server 2008 Books Online
 See also
Large UDTs
 Retrieving UDT Schemas Using GetSchema
  GetSchemaTable Column Values for UDTs
 SqlDataReader Considerations
 Specifying SqlParameters
 Retrieving Data Example
 See also
XML Data in SQL Server
 In This Section
 See also
SQL XML Column Values
 Example
 See also
Specifying XML Values as Parameters
 Example
 See also
SQL Server Binary and Large-Value Data
 In This Section
 See also
Modifying Large-Value (max) Data in ADO.NET
 Large-Value Type Restrictions
 Working with Large-Value Types in Transact-SQL
 Updating Data Using UPDATE .WRITE
 Example
 Working with Large-Value Types in ADO.NET
  Using GetSqlBytes to Retrieve Data
  Using GetSqlChars to Retrieve Data
  Using GetSqlBinary to Retrieve Data
  Using GetBytes to Retrieve Data
  Using GetValue to Retrieve Data
 Converting from Large Value Types to CLR Types
  Example
 Using Large Value Type Parameters
  Example
 See also
FILESTREAM Data
 SqlClient Support for FILESTREAM
  Creating the SQL Server Table
  Example: Reading, Overwriting, and Inserting FILESTREAM Data
 Resources in SQL Server Books Online
 See also
Inserting an Image from a File
 Example
 See also
SQL Server Data Operations in ADO.NET
 In This Section
 See also
Bulk Copy Operations in SQL Server
 In This Section
 See also
Bulk Copy Example Setup
 Table Setup
 See also
Single Bulk Copy Operations
 Example
 Performing a Bulk Copy Operation Using Transact-SQL and the Command Class
 See also
Multiple Bulk Copy Operations
 See also
Transaction and Bulk Copy Operations
 Performing a Non-transacted Bulk Copy Operation
 Performing a Dedicated Bulk Copy Operation in a Transaction
 Using Existing Transactions
 See also
Multiple Active Result Sets (MARS)
 In This Section
 Related Sections
 See also
Enabling Multiple Active Result Sets
 Enabling and Disabling MARS in the Connection String
 Special Considerations When Using MARS
  Statement Interleaving
  MARS Session Cache
  Thread Safety
  Connection Pooling
  SQL Server Batch Execution Environment
  Parallel Execution
  Detecting MARS Support
 See also
 Using Multiple Commands with MARS
  Example
 Reading and Updating Data with MARS
  Example
 See also
Asynchronous Operations
 In This Section
 See also
Windows Applications Using Callbacks
 Example
 See also
ASP.NET Applications Using Wait Handles
 Example: Wait (Any) Model
 Example: Wait (All) Model
 See also
Polling in Console Applications
 Example
 See also
Table-Valued Parameters
 Passing Multiple Rows in Previous Versions of SQL Server
 Creating Table-Valued Parameter Types
 Modifying Data with Table-Valued Parameters (Transact-SQL)
 Limitations of Table-Valued Parameters
 Configuring a SqlParameter Example
 Passing a Table-Valued Parameter to a Stored Procedure
  Passing a Table-Valued Parameter to a Parameterized SQL Statement
 Streaming Rows with a DataReader
 See also
SQL Server Features and ADO.NET
 In This Section
 See also
Enumerating Instances of SQL Server (ADO.NET)
 Retrieving an Enumerator Instance
 Enumeration Limitations
 Example
 See also
Provider Statistics for SQL Server
 Statistical Values Available
  Retrieving a Value
  Retrieving All Values
 See also
SQL Server Express User Instances
 User Instance Capabilities
 Enabling User Instances
 Connecting to a User Instance
  Using the |DataDirectory| Substitution String
 Lifetime of a User Instance Connection
 How User Instances Work
 User Instance Scenarios
 See also
Database Mirroring in SQL Server
 Specifying the Failover Partner in the Connection String
 Retrieving the Current Server Name
 SqlClient Mirroring Behavior
 Database Mirroring Resources
 See also
SQL Server Common Language Runtime Integration
 In This Section
 See also
Introduction to SQL Server CLR Integration
 Enabling CLR Integration
 Deploying a CLR Assembly
 CLR Integration Security
 Debugging a CLR Assembly
 See also
CLR User-Defined Functions
 See also
CLR User-Defined Types
 See also
CLR Stored Procedures
 See also
CLR Triggers
 See also
The Context Connection
 See also
SQL Server In-Process-Specific Behavior of ADO.NET
 See also
Query Notifications in SQL Server
 In This Section
 Reference
 See also
Enabling Query Notifications
 Query Notifications Requirements
 Enabling Query Notifications to Run Sample Code
 Query Notifications Permissions
 Choosing a Notification Object
  Using SqlDependency
  Using SqlNotificationRequest
 See also
SqlDependency in an ASP.NET Application
 About the Sample Application
 Creating the Sample Application
  Testing the Application
 See also
Detecting Changes with SqlDependency
 Security Considerations
  Example
 See also
SqlCommand Execution with a SqlNotificationRequest
 Creating the Notification Request
  Example
 See also
Snapshot Isolation in SQL Server
 Understanding Snapshot Isolation and Row Versioning
 Managing Concurrency with Isolation Levels
  Snapshot Isolation Level Extensions
 How Snapshot Isolation and Row Versioning Work
 Working with Snapshot Isolation in ADO.NET
  Example
  Example
  Using Lock Hints with Snapshot Isolation
 See also
SqlClient Support for High Availability, Disaster Recovery
 Connecting With MultiSubnetFailover
 Upgrading to Use Multi-Subnet Clusters from Database Mirroring
 Specifying Application Intent
 Read-Only Routing
 See also
SqlClient Support for LocalDB
 Remarks
 Programmatically Create a Named Instance
 See also
LINQ to SQL
 In This Section
 Related Sections
Getting Started
 Next Steps
 See also
What You Can Do With LINQ to SQL
 Selecting
 Inserting
 Updating
 Deleting
 See also
Typical Steps for Using LINQ to SQL
 Creating the Object Model
  1. Select a tool to create the model.
  2. Select the kind of code you want to generate.
  3. Refine the code file to reflect the needs of your application.
 Using the Object Model
  1. Create queries to retrieve information from the database.
  2. Override default behaviors for Insert, Update, and Delete.
  3. Set appropriate options to detect and report concurrency conflicts.
  4. Establish an inheritance hierarchy.
  5. Provide an appropriate user interface.
  6. Debug and test your application.
 See also
Get the sample databases for ADO.NET code samples
 Get the Northwind sample database for SQL Server
 Get the Northwind sample database for Microsoft Access
 Get the AdventureWorks sample database for SQL Server
 Get SQL Server Management Studio
 Get SQL Server Express
 See also
Learning by Walkthroughs
 Getting Started Walkthroughs
 General
 Troubleshooting
  Log-On Issues
    To verify or change the database log on
  Protocols
    To enable the Named Pipes protocol
  Stopping and Restarting the Service
    To stop and restart the service
 See also
Walkthrough: Simple Object Model and Query (Visual Basic)
 Prerequisites
 Overview
 Creating a LINQ to SQL Solution
  To create a LINQ to SQL solution
 Adding LINQ References and Directives
  To add System.Data.Linq
 Mapping a Class to a Database Table
  To create an entity class and map it to a database table
 Designating Properties on the Class to Represent Database Columns
  To represent characteristics of two database columns
 Specifying the Connection to the Northwind Database
  To specify the database connection
 Creating a Simple Query
  To create a simple query
 Executing the Query
  To execute the query
 Next Steps
 See also
Walkthrough: Querying Across Relationships (Visual Basic)
 Prerequisites
 Overview
 Mapping Relationships across Tables
   To add the Order entity class
 Annotating the Customer Class
   To annotate the Customer class
 Creating and Running a Query across the Customer-Order Relationship
   To access Order objects by using Customer objects
 Creating a Strongly Typed View of Your Database
   To strongly type the DataContext object
 Next Steps
 See also
Walkthrough: Manipulating Data (Visual Basic)
 Prerequisites
 Overview
 Creating a LINQ to SQL Solution
   To create a LINQ to SQL solution
 Adding LINQ References and Directives
   To add System.Data.Linq
 Adding the Northwind Code File to the Project
   To add the northwind code file to the project
 Setting Up the Database Connection
   To set up and test the database connection
 Creating a New Entity
   To add a new Customer entity object
 Updating an Entity
   To change the name of a Customer
 Deleting an Entity
   To delete a row
 Submitting Changes to the Database
   To submit changes to the database
 See also
Walkthrough: Using Only Stored Procedures (Visual Basic)
 Prerequisites
 Overview
 Creating a LINQ to SQL Solution
  To create a LINQ to SQL solution
 Adding the LINQ to SQL Assembly Reference
  To add System.Data.Linq.dll
 Adding the Northwind Code File to the Project
  To add the northwind code file to the project
 Creating a Database Connection
  To create the database connection
 Setting up the User Interface
  To set up the user interface
  To handle button clicks
 Testing the Application
  To test the application
 Next Steps
 See also
Walkthrough: Simple Object Model and Query (C#)
 Prerequisites
 Overview
 Creating a LINQ to SQL Solution
  To create a LINQ to SQL solution
 Adding LINQ References and Directives
  To add System.Data.Linq
 Mapping a Class to a Database Table
  To create an entity class and map it to a database table
 Designating Properties on the Class to Represent Database Columns
  To represent characteristics of two database columns
 Specifying the Connection to the Northwind Database
  To specify the database connection
 Creating a Simple Query
  To create a simple query
 Executing the Query
  To execute the query
 Next Steps
 See also
Walkthrough: Querying Across Relationships (C#)
 Prerequisites
 Overview
 Mapping Relationships Across Tables
  To add the Order entity class
 Annotating the Customer Class
  To annotate the Customer class
 Creating and Running a Query Across the Customer-Order Relationship
  To access Order objects by using Customer objects
 Creating a Strongly Typed View of Your Database
  To strongly type the DataContext object
 Next Steps
 See also
Walkthrough: Manipulating Data (C#)
 Prerequisites
 Overview
 Creating a LINQ to SQL Solution
   To create a LINQ to SQL solution
 Adding LINQ References and Directives
   To add System.Data.Linq
 Adding the Northwind Code File to the Project
   To add the northwind code file to the project
 Setting Up the Database Connection
   To set up and test the database connection
 Creating a New Entity
   To add a new Customer entity object
 Updating an Entity
   To change the name of a Customer
 Deleting an Entity
   To delete a row
 Submitting Changes to the Database
   To submit changes to the database
 See also
Walkthrough: Using Only Stored Procedures (C#)
 Prerequisites
 Overview
 Creating a LINQ to SQL Solution
  To create a LINQ to SQL solution
 Adding the LINQ to SQL Assembly Reference
  To add System.Data.Linq.dll
 Adding the Northwind Code File to the Project
  To add the northwind code file to the project
 Creating a Database Connection
  To create the database connection
 Setting up the User Interface
  To set up the user interface
  To handle button clicks
 Testing the Application
  To test the application
 Next Steps
 See also
Programming Guide
 In This Section
 Related Sections
Creating the Object Model
 In This Section
 Related Sections
How to: Generate the Object Model in Visual Basic or C#
 Example
 Example
 See also
How to: Generate the Object Model as an External File
 Example
 Example
 See also
How to: Generate Customized Code by Modifying a DBML File
 Example
 Example
 See also
How to: Validate DBML and External Mapping Files
  To validate a .dbml or XML file
 Alternate Method for Supplying Schema Definition
   To copy a schema definition file from a Help topic
 See also
How to: Make Entities Serializable
 Example
 See also
How to: Customize Entity Classes by Using the Code Editor
 See also
How to: Specify Database Names
  To specify the name of the database
 See also
How to: Represent Tables as Classes
  To map a class to a database table
 Example
 See also
How to: Represent Columns as Class Members
  To map a field or property to a database column
 Example
 See also
How to: Represent Primary Keys
  To designate a property or field as a primary key
 See also
How to: Map Database Relationships
 Example
 Example
 See also
How to: Represent Columns as Database-Generated
  To designate a field or property as representing a database-generated column
 See also
How to: Represent Columns as Timestamp or Version Columns
  To designate a field or property as representing a timestamp or version column
 See also
How to: Specify Database Data Types
  To specify text to define a data type in a T-SQL table
 See also
How to: Represent Computed Columns
  To represent a computed column
 See also
How to: Specify Private Storage Fields
  To specify the name of an underlying storage field
 See also
How to: Represent Columns as Allowing Null Values
  To designate a column as allowing null values
 See also
How to: Map Inheritance Hierarchies
  To map an inheritance hierarchy
 Example
 See also
How to: Specify Concurrency-Conflict Checking
 Example
 See also
Communicating with the Database
 In This Section
 See also
How to: Connect to a Database
 Example
 Example
 See also
How to: Directly Execute SQL Commands
 Example
 See also
How to: Reuse a Connection Between an ADO.NET Command and a DataContext
 Example
 See also
Querying the Database
 In This Section
How to: Query for Information
 Example
 See also
How to: Retrieve Information As Read-Only
 Example
 See also
How to: Control How Much Related Data Is Retrieved
 Example
 See also
How to: Filter Related Data
 Example
 See also
How to: Turn Off Deferred Loading
 Example
 See also
How to: Directly Execute SQL Queries
 Example
 Example
 See also
How to: Store and Reuse Queries
 Example
 Example
 See also
How to: Handle Composite Keys in Queries
 Example
 Example
 See also
How to: Retrieve Many Objects At Once
 Example
 See also
How to: Filter at the DataContext Level
 Example
 See also
Query Examples
 In This Section
 Related Sections
Aggregate Queries
 In This Section
 Related Sections
Return the Average Value From a Numeric Sequence
 Example
 Example
 Example
 See also
Count the Number of Elements in a Sequence
 Example
 Example
 See also
Find the Maximum Value in a Numeric Sequence
 Example
 Example
 Example
 See also
Find the Minimum Value in a Numeric Sequence
 Example
 Example
 Example
 See also
Compute the Sum of Values in a Numeric Sequence
 Example
 Example
 See also
Return the First Element in a Sequence
 Example
 Example
 See also
Return Or Skip Elements in a Sequence
 Example
 Example
 Example
 See also
Sort Elements in a Sequence
 Example
 Example
 Example
 Example
 Example
 Example
 See also
Group Elements in a Sequence
 Example
 Example
 Example
 Example
 Example
 Example
 Example
 Example
 Example
 See also
Eliminate Duplicate Elements from a Sequence
 Example
 See also
Determine if Any or All Elements in a Sequence Satisfy a Condition
 Example
 Example
 Example
 See also
Concatenate Two Sequences
 Example
 Example
 See also
Return the Set Difference Between Two Sequences
 Example
 See also
Return the Set Intersection of Two Sequences
 Example
 See also
Return the Set Union of Two Sequences
 Example
 See also
Convert a Sequence to an Array
 Example
 See also
Convert a Sequence to a Generic List
 Example
 See also
Convert a Type to a Generic IEnumerable
 Example
 See also
Formulate Joins and Cross-Product Queries
 Example
 Example
 Example
 Example
 Example
 Example
 Example
 Example
 Example
 Example
 Example
 See also
Formulate Projections
 Example
 Example
 Example
 Example
 Example
 Example
 Example
 Example
 Example
 See also
How to: Insert Rows Into the Database
  To insert a row into the database
 Example
 See also
How to: Update Rows in the Database
  To update a row in the database
 Example
 See also
How to: Delete Rows From the Database
  To delete a row in the database
 Example
 Example
 See also
How to: Submit Changes to the Database
 Example
 See also
How to: Bracket Data Submissions by Using Transactions
 Example
 See also
How to: Dynamically Create a Database
 Example
 Example
 Example
 See also
How to: Manage Change Conflicts
 In This Section
 Related Sections
How to: Detect and Resolve Conflicting Submissions
 Example
 See also
How to: Specify When Concurrency Exceptions are Thrown
 Example
 See also
How to: Specify Which Members are Tested for Concurrency Conflicts
  To always use this member for detecting conflicts
  To never use this member for detecting conflicts
  To use this member for detecting conflicts only when the application has changed the value of the member
 Example
 See also
How to: Retrieve Entity Conflict Information
 Example
 See also
How to: Retrieve Member Conflict Information
 Example
 See also
How to: Resolve Conflicts by Retaining Database Values
 Example
 See also
How to: Resolve Conflicts by Overwriting Database Values
 Example
 See also
How to: Resolve Conflicts by Merging with Database Values
 Example
 See also
Debugging Support
 In This Section
 See also
How to: Display Generated SQL
 Example
 See also
How to: Display a ChangeSet
 Example
 See also
How to: Display LINQ to SQL Commands
 Example
 See also
Troubleshooting
 Unsupported Standard Query Operators
 Memory Issues
 File Names and SQLMetal
 Class Library Projects
 Cascade Delete
 Expression Not Queryable
 DuplicateKeyException
 String Concatenation Exceptions
 Skip and Take Exceptions in SQL Server 2000
 GroupBy InvalidOperationException
 OnCreated() Partial Method
 See also
Background Information
 In This Section
 Related Sections
ADO.NET and LINQ to SQL
 Connections
 Transactions
 Direct SQL Commands
  Parameters
 See also
Analyzing LINQ to SQL Source Code
 See also
Customizing Insert, Update, and Delete Operations
 In This Section
Customizing Operations: Overview
 Loading Options
 Partial Methods
 Stored Procedures and User-Defined Functions
 See also
Insert, Update, and Delete Operations
 See also
Responsibilities of the Developer In Overriding Default Behavior
 See also
Adding Business Logic By Using Partial Methods
 Example
  Description
  Code
 Example
  Description
  Code
 See also
Data Binding
 Underlying Principle
 Operation
 IListSource Implementation
 Specialized Collections
  Generic SortableBindingList
  Generic DataBindingList
 Binding to EntitySets
  Adding a Sorting Feature
 Caching
 Cancellation
 Troubleshooting
 See also
Inheritance Support
 See also
Local Method Calls
 Example 1
 See also
N-Tier and Remote Applications with LINQ to SQL
 Additional Resources
 See also
LINQ to SQL N-Tier with ASP.NET
 See also
LINQ to SQL N-Tier with Web Services
 Setting up LINQ to SQL on the Middle Tier
 Defining the Serializable Types
 Retrieving and Inserting Data
 Tracking Changes for Updates and Deletes
 See also
Implementing Business Logic (LINQ to SQL)
 How LINQ to SQL Invokes Your Business Logic
 A Closer Look at the Extensibility Points
 See also
Data Retrieval and CUD Operations in N-Tier Applications (LINQ to SQL)
 Retrieving Data
  Client Method Call
  Middle Tier Implementation
 Inserting Data
  Middle Tier Implementation
 Deleting Data
 Updating Data
  Optimistic concurrency with timestamps
  With Subset of Original Values
  With Complete Entities
  Expected Entity Members
  State
 See also
Object Identity
 Examples
  Object Caching Example 1
  Object Caching Example 2
 See also
The LINQ to SQL Object Model
 LINQ to SQL Entity Classes and Database Tables
  Example
 LINQ to SQL Class Members and Database Columns
  Example
 LINQ to SQL Associations and Database Foreign-key Relationships
  Example
 LINQ to SQL Methods and Database Stored Procedures
  Example
 See also
Object States and Change-Tracking
 Object States
 Inserting Objects
 Deleting Objects
 Updating Objects
 See also
Optimistic Concurrency: Overview
 Example
 Conflict Detection and Resolution Checklist
 LINQ to SQL Types That Support Conflict Discovery and Resolution
 See also
Query Concepts
 In This Section
 Related Sections
LINQ to SQL Queries
 See also
Querying Across Relationships
 See also
Remote vs. Local Execution
 Remote Execution
 Local Execution
 Comparison
  Queries Against Unordered Sets
 See also
Deferred versus Immediate Loading
 See also
Retrieving Objects from the Identity Cache
 Example
 See also
Security in LINQ to SQL
 Access Control and Authentication
 Mapping and Schema Information
 Connection Strings
 See also
Serialization
 Overview
  Definitions
 Code Example
  How to Serialize the Entities
  Self-Recursive Relationships
 See also
Stored Procedures
 In This Section
 Related Sections
How to: Return Rowsets
 Example
 See also
How to: Use Stored Procedures that Take Parameters
 Example
 Example
 See also
How to: Use Stored Procedures Mapped for Multiple Result Shapes
 Example
 Example
 See also
How to: Use Stored Procedures Mapped for Sequential Result Shapes
 Example
 Example
 See also
Customizing Operations By Using Stored Procedures
 Example
  Description
  Code
 Example
  Description
  Code
 Example
  Description
  Code
 See also
Customizing Operations by Using Stored Procedures Exclusively
 Example
  Description
  Code
 See also
Transaction Support
 Explicit Local Transaction
 Explicit Distributable Transaction
 Implicit Transaction
 See also
SQL-CLR Type Mismatches
 Data Types
  Missing Counterparts
  Multiple Mappings
  User-defined Types
 Expression Semantics
  Null Semantics
  Type Conversion and Promotion
  Collation
  Operator and Function Differences
  Type Casting
 Performance Issues
 See also
SQL-CLR Custom Type Mappings
 Customization with SQLMetal or O/R Designer
 Incorporating Database Changes
 See also
User-Defined Functions
 In This Section
How to: Use Scalar-Valued User-Defined Functions
 Example
 See also
How to: Use Table-Valued User-Defined Functions
 Example
 Example
 See also
How to: Call User-Defined Functions Inline
 Example
 See also
Reference
 In This Section
 Related Sections
Data Types and Functions
 See also
SQL-CLR Type Mapping
 Default Type Mapping
 Type Mapping Run-time Behavior Matrix
  Custom Type Mapping
 Behavior Differences Between CLR and SQL Execution
 Enum Mapping
 Numeric Mapping
  Decimal and Money Types
 Text and XML Mapping
  XML Types
  Custom Types
 Date and Time Mapping
  System.Datetime
  System.TimeSpan
 Binary Mapping
  SQL Server FILESTREAM
  Binary Serialization
 Miscellaneous Mapping
 See also
Basic Data Types
 Casting
 Equality Operators
 See also
Boolean Data Types
 See also
Null Semantics
 See also
Numeric and Comparison Operators
 Supported Operators
 See also
Sequence Operators
 Differences from .NET
 See also
System.Convert Methods
 See also
System.DateTime Methods
 Supported System.DateTime Members
 Members Not Supported by LINQ to SQL
 Method Translation Example
 SQLMethods Date and Time Methods
 See also
System.Math Methods
 Differences from .NET
 See also
System.Object Methods
 Differences from .NET
 See also
System.String Methods
 Unsupported System.String Methods in General
 Unsupported System.String Static Methods
 Unsupported System.String Non-static Methods
 Differences from .NET
 See also
System.TimeSpan Methods
 Previous Limitations
 Supported System.TimeSpan member support
  Addition and Subtraction
 See also
System.DateTimeOffset Methods
 SQLMethods Date and Time Methods
 See also
Attribute-Based Mapping
 DatabaseAttribute Attribute
 TableAttribute Attribute
 ColumnAttribute Attribute
 AssociationAttribute Attribute
 InheritanceMappingAttribute Attribute
 FunctionAttribute Attribute
 ParameterAttribute Attribute
 ResultTypeAttribute Attribute
 DataAttribute Attribute
 See also
Code Generation in LINQ to SQL
 DBML Extractor
 Code Generator
 XML Schema Definition File
 Sample DBML File
 See also
External Mapping
 Requirements
 XML Schema Definition File
 See also
Frequently Asked Questions
 Cannot Connect
 Changes to Database Lost
 Database Connection: Open How Long?
 Updating Without Querying
 Unexpected Query Results
 Unexpected Stored Procedure Results
 Serialization Errors
 Multiple DBML Files
 Avoiding Explicit Setting of Database-Generated Values on Insert or Update
 Multiple DataLoadOptions
 Errors Using SQL Compact 3.5
 Errors in Inheritance Relationships
 Provider Model
 SQL-Injection Attacks
 Changing Read-only Flag in DBML Files
 APTCA
 Mapping Data from Multiple Tables
 Connection Pooling
 Second DataContext Is Not Updated
 Cannot Call SubmitChanges in Read-only Mode
 See also
SQL Server Compact and LINQ to SQL
 Characteristics of SQL Server Compact in Relation to LINQ to SQL
 Feature Set
 See also
Standard Query Operator Translation
 Operator Support
  Concat
  Intersect, Except, Union
  Take, Skip
  Operators with No Translation
 Expression Translation
  Null semantics
  Aggregates
  Entity Arguments
  Equatable / Comparable Arguments
  Visual Basic Function Translation
 Inheritance Support
  Inheritance Mapping Restrictions
  Inheritance in Queries
 SQL Server 2008 Support
  Unsupported Query Operators
 SQL Server 2005 Support
 SQL Server 2000 Support
  Cross Apply and Outer Apply Operators
  text / ntext
  Behavior Triggered by Nested Queries
  Skip and Take Operators
 Object Materialization
 See also
Samples
 In This Section
 See also
 Source/Reference

ADO.NET SQL Server

This section describes features and behaviors that are specific to the .NET Framework Data Provider for SQL Server (System.Data.SqlClient).

System.Data.SqlClient provides access to supported versions of SQL Server and encapsulates the database-specific protocols. The functionality of the data provider is designed to be similar to that of the .NET Framework data providers for OLE DB, ODBC, and Oracle. System.Data.SqlClient includes a tabular data stream (TDS) parser to communicate directly with SQL Server.

Note

To use the .NET Framework Data Provider for SQL Server, an application must reference the System.Data.SqlClient namespace.
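
For illustration, the following minimal sketch opens a connection and runs a simple query with SqlClient. The server, database, and table names are placeholders that you would replace with values appropriate to your environment.

using System;
using System.Data.SqlClient;

class SqlClientExample
{
    static void Main()
    {
        // Placeholder connection string; adjust the server and database names.
        string connectionString =
            "Server=MSSQL1;Database=AdventureWorks;Integrated Security=true;";

        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand(
            "SELECT TOP 5 Name FROM Production.Product", connection))
        {
            connection.Open();
            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine(reader.GetString(0));
                }
            }
        }
    }
}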

In This Section

SQL Server Security
Provides an overview of SQL Server security features, and application scenarios for creating secure ADO.NET applications that target SQL Server.

SQL Server Data Types and ADO.NET
Describes how to work with SQL Server data types and how they interact with .NET Framework data types.

SQL Server Binary and Large-Value Data
Describes how to work with large value data in SQL Server.

SQL Server Data Operations in ADO.NET
Describes how to work with data in SQL Server. Contains sections about bulk copy operations, MARS, asynchronous operations, and table-valued parameters.

SQL Server Features and ADO.NET
Describes SQL Server features that are useful for ADO.NET application developers.

LINQ to SQL
Describes the basic building blocks, processes, and techniques required for creating LINQ to SQL applications.

For complete documentation of the SQL Server Database Engine, see SQL Server Books Online for the version of SQL Server you are using.

SQL Server Books Online

See also

SQL Server Security

SQL Server has many features that support creating secure database applications.

Common security considerations, such as data theft or vandalism, apply regardless of the version of SQL Server you are using. Data integrity should also be considered as a security issue. If data is not protected, it is possible that it could become worthless if ad hoc data manipulation is permitted and the data is inadvertently or maliciously modified with incorrect values or deleted entirely. In addition, there are often legal requirements that must be adhered to, such as the correct storage of confidential information. Storing some kinds of personal data is proscribed entirely, depending on the laws that apply in a particular jurisdiction.

Each version of SQL Server has different security features, as does each version of Windows, with later versions having enhanced functionality over earlier ones. It is important to understand that security features alone cannot guarantee a secure database application. Each database application is unique in its requirements, execution environment, deployment model, physical location, and user population. Some applications that are local in scope may need only minimal security whereas other local applications or applications deployed over the Internet may require stringent security measures and ongoing monitoring and evaluation.

The security requirements of a SQL Server database application should be considered at design time, not as an afterthought. Evaluating threats early in the development cycle gives you the opportunity to mitigate potential damage wherever a vulnerability is detected.

Even if the initial design of an application is sound, new threats may emerge as the system evolves. By creating multiple lines of defense around your database, you can minimize the damage inflicted by a security breach. Your first line of defense is to reduce the attack surface area by never granting more permissions than are absolutely necessary.

The topics in this section briefly describe the security features in SQL Server that are relevant for developers, with links to relevant topics in SQL Server Books Online and other resources that provide more detailed coverage.

In This Section

Overview of SQL Server Security
Describes the architecture and security features of SQL Server.

Application Security Scenarios in SQL Server
Contains topics discussing various application security scenarios for ADO.NET and SQL Server applications.

SQL Server Express Security
Describes security considerations for SQL Server Express.

Related Sections

Security Center for SQL Server Database Engine and Azure SQL Database
Describes security considerations for SQL Server and Azure SQL Database.

Security Considerations for a SQL Server Installation
Describes security concerns to consider before installing SQL Server.

See also

Overview of SQL Server Security

A defense-in-depth strategy, with overlapping layers of security, is the best way to counter security threats. SQL Server provides a security architecture that is designed to allow database administrators and developers to create secure database applications and counter threats. Each version of SQL Server has improved on previous versions of SQL Server with the introduction of new features and functionality. However, security does not ship in the box. Each application is unique in its security requirements. Developers need to understand which combination of features and functionality are most appropriate to counter known threats, and to anticipate threats that may arise in the future.

A SQL Server instance contains a hierarchical collection of entities, starting with the server. Each server contains multiple databases, and each database contains a collection of securable objects. Every SQL Server securable has associated permissions that can be granted to a principal, which is an individual, group or process granted access to SQL Server. The SQL Server security framework manages access to securable entities through authentication and authorization.

  • Authentication is the process of logging on to SQL Server by which a principal requests access by submitting credentials that the server evaluates. Authentication establishes the identity of the user or process being authenticated.

  • Authorization is the process of determining which securable resources a principal can access, and which operations are allowed for those resources.

The topics in this section cover SQL Server security fundamentals, providing links to the complete documentation in the relevant version of SQL Server Books Online.

In This Section

Authentication in SQL Server
Describes logins and authentication in SQL Server and provides links to additional resources.

Server and Database Roles in SQL Server
Describes fixed server and database roles, custom database roles, and built-in accounts and provides links to additional resources.

Ownership and User-Schema Separation in SQL Server
Describes object ownership and user-schema separation and provides links to additional resources.

Authorization and Permissions in SQL Server
Describes granting permissions using the principle of least privilege and provides links to additional resources.

Data Encryption in SQL Server
Describes data encryption options in SQL Server and provides links to additional resources.

CLR Integration Security in SQL Server
Provides links to CLR integration security resources.

See also

Authentication in SQL Server

SQL Server supports two authentication modes, Windows authentication mode and mixed mode.

  • Windows authentication is the default, and is often referred to as integrated security because this SQL Server security model is tightly integrated with Windows. Specific Windows user and group accounts are trusted to log in to SQL Server. Windows users who have already been authenticated do not have to present additional credentials.

  • Mixed mode supports authentication both by Windows and by SQL Server. User name and password pairs are maintained within SQL Server.

Important

We recommend using Windows authentication wherever possible. Windows authentication uses a series of encrypted messages to authenticate users in SQL Server. When SQL Server logins are used, SQL Server login names and encrypted passwords are passed across the network, which makes them less secure.

With Windows authentication, users are already logged onto Windows and do not have to log on separately to SQL Server. The following SqlConnection.ConnectionString specifies Windows authentication without requiring users to provide a user name or password.

"Server=MSSQL1;Database=AdventureWorks;Integrated Security=true;  

Note

Logins are distinct from database users. You must map logins or Windows groups to database users or roles in a separate operation. You then grant permissions to users or roles to access database objects.

Authentication Scenarios

Windows authentication is usually the best choice in the following situations:

  • There is a domain controller.

  • The application and the database are on the same computer.

  • You are using an instance of SQL Server Express or LocalDB.

SQL Server logins are often used in the following situations:

  • If you have a workgroup.

  • Users connect from different, non-trusted domains.

  • Internet applications, such as ASP.NET.

Note

Specifying Windows authentication does not disable SQL Server logins. Use the ALTER LOGIN DISABLE Transact-SQL statement to disable highly-privileged SQL Server logins.

Login Types

SQL Server supports three types of logins:

  • A local Windows user account or trusted domain account. SQL Server relies on Windows to authenticate the Windows user accounts.

  • Windows group. Granting access to a Windows group grants access to all Windows user logins that are members of the group.

  • SQL Server login. SQL Server stores both the username and a hash of the password in the master database, using internal authentication methods to verify login attempts.

Note

SQL Server provides logins created from certificates or asymmetric keys that are used only for code signing. They cannot be used to connect to SQL Server.

Mixed Mode Authentication

If you must use mixed mode authentication, you must create SQL Server logins, which are stored in SQL Server. You then have to supply the SQL Server user name and password at run time.
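
As a rough sketch (the server, database, and login names are placeholders), a SQL Server login and password can be supplied at run time through a SqlCredential (available in .NET Framework 4.5 and later), which keeps the password out of the connection string:

using System;
using System.Data.SqlClient;
using System.Security;

class MixedModeExample
{
    static void Main()
    {
        // Build a read-only SecureString for the password. A literal is used here only
        // for brevity; in a real application, collect the password from the user.
        SecureString password = new SecureString();
        foreach (char c in "placeholder-password") { password.AppendChar(c); }
        password.MakeReadOnly();

        SqlCredential credential = new SqlCredential("AppLogin", password);

        // The connection string itself contains no credentials.
        using (SqlConnection connection = new SqlConnection(
            "Server=MSSQL1;Database=AdventureWorks;", credential))
        {
            connection.Open();
            Console.WriteLine("Connected as " + credential.UserId);
        }
    }
}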

Important

SQL Server installs with a SQL Server login named sa (an abbreviation of "system administrator"). Assign a strong password to the sa login and do not use the sa login in your application. The sa login maps to the sysadmin fixed server role, which has irrevocable administrative credentials on the whole server. There are no limits to the potential damage if an attacker gains access as a system administrator. All members of the Windows BUILTIN\Administrators group (the local administrator's group) are members of the sysadmin role by default, but can be removed from that role.

SQL Server provides Windows password policy mechanisms for SQL Server logins when it is running on Windows Server 2003 or later versions. Password complexity policies are designed to deter brute force attacks by increasing the number of possible passwords. SQL Server can apply the same complexity and expiration policies used in Windows Server 2003 to passwords used inside SQL Server.

Important

Concatenating connection strings from user input can leave you vulnerable to a connection string injection attack. Use the SqlConnectionStringBuilder to create syntactically valid connection strings at run time. For more information, see Connection String Builders.
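
For example, a minimal sketch (the server name is a placeholder) that composes the connection string from keyword properties instead of concatenating raw user input:

using System;
using System.Data.SqlClient;

class ConnectionStringBuilderExample
{
    static void Main()
    {
        // A value that might originate from user input or configuration.
        string userSuppliedDatabase = "AdventureWorks";

        SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder();
        builder.DataSource = "MSSQL1";
        builder.InitialCatalog = userSuppliedDatabase;  // escaped by the builder
        builder.IntegratedSecurity = true;

        // Stray characters in the input become part of a single keyword value
        // rather than injecting additional keywords into the connection string.
        using (SqlConnection connection = new SqlConnection(builder.ConnectionString))
        {
            connection.Open();
            Console.WriteLine(connection.Database);
        }
    }
}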

External Resources

For more information, see the following resources.

Resource | Description
Principals | Describes logins and other security principals in SQL Server.

See also

Server and Database Roles in SQL Server

All versions of SQL Server use role-based security, which allows you to assign permissions to a role, or group of users, instead of to individual users. Fixed server and fixed database roles have a fixed set of permissions assigned to them.

Fixed Server Roles

Fixed server roles have a fixed set of permissions and server-wide scope. They are intended for use in administering SQL Server and the permissions assigned to them cannot be changed. Logins can be assigned to fixed server roles without having a user account in a database.

Important

The sysadmin fixed server role encompasses all other roles and has unlimited scope. Do not add principals to this role unless they are highly trusted. sysadmin role members have irrevocable administrative privileges on all server databases and resources.

Be selective when you add users to fixed server roles. For example, the bulkadmin role allows users to insert the contents of any local file into a table, which could jeopardize data integrity. See SQL Server Books Online for the complete list of fixed server roles and permissions.

Fixed Database Roles

Fixed database roles have a pre-defined set of permissions that are designed to allow you to easily manage groups of permissions. Members of the db_owner role can perform all configuration and maintenance activities on the database.

For more information about SQL Server predefined roles, see the following resources.

Resource | Description
Server-Level Roles | Describes fixed server roles and the permissions associated with them in SQL Server.
Database-Level Roles | Describes fixed database roles and the permissions associated with them.

Database Roles and Users

Logins must be mapped to database user accounts in order to work with database objects. Database users can then be added to database roles, inheriting any permission sets associated with those roles. Permissions can also be granted directly to individual users.
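
As a rough illustration (the login, user, role, and database names are placeholders, and the connection must have sufficient administrative permissions), the mapping and role membership could be set up with Transact-SQL such as the following, executed here from ADO.NET:

using System.Data.SqlClient;

class CreateDatabaseUserExample
{
    static void Main()
    {
        // Placeholder names. ALTER ROLE ... ADD MEMBER requires SQL Server 2012 or later;
        // earlier versions use sp_addrolemember instead.
        const string sql = @"
            CREATE LOGIN [CONTOSO\AppUsers] FROM WINDOWS;
            CREATE USER [AppUsers] FOR LOGIN [CONTOSO\AppUsers];
            ALTER ROLE db_datareader ADD MEMBER [AppUsers];";

        using (SqlConnection connection = new SqlConnection(
            "Server=MSSQL1;Database=AdventureWorks;Integrated Security=true;"))
        using (SqlCommand command = new SqlCommand(sql, connection))
        {
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}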

You must also consider the public role, the dbo user account, and the guest account when you design security for your application.

The public Role

The public role is contained in every database, including the system databases. It cannot be dropped and you cannot add or remove users from it. Permissions granted to the public role are inherited by all other users and roles because they belong to the public role by default. Grant public only the permissions you want all users to have.

The dbo User Account

The dbo, or database owner, is a user account that has implied permissions to perform all activities in the database. Members of the sysadmin fixed server role are automatically mapped to dbo.

Note

dbo is also the name of a schema, as discussed in Ownership and User-Schema Separation in SQL Server.

The dbo user account is frequently confused with the db_owner fixed database role. The scope of db_owner is a database; the scope of sysadmin is the whole server. Membership in the db_owner role does not confer dbo user privileges.

The guest User Account

After a user has been authenticated and allowed to log in to an instance of SQL Server, a separate user account must exist in each database the user has to access. Requiring a user account in each database prevents users from connecting to an instance of SQL Server and accessing all the databases on a server. The existence of a guest user account in the database circumvents this requirement by allowing a login without a database user account to access a database.

The guest account is a built-in account in all versions of SQL Server. By default, it is disabled in new databases. If it is enabled, you can disable it by revoking its CONNECT permission with the Transact-SQL REVOKE CONNECT FROM GUEST statement.

Important

Avoid using the guest account; all logins without their own database permissions obtain the database permissions granted to this account. If you must use the guest account, grant it minimum permissions.

For more information about SQL Server logins, users and roles, see the following resources.

Resource | Description
Getting Started with Database Engine Permissions | Contains links to topics that describe principals, roles, credentials, securables and permissions.
Principals | Describes principals and contains links to topics that describe server and database roles.

See also

Ownership and User-Schema Separation in SQL Server

A core concept of SQL Server security is that owners of objects have irrevocable permissions to administer them. You cannot remove privileges from an object owner, and you cannot drop users from a database if they own objects in it.

User-Schema Separation

User-schema separation allows for more flexibility in managing database object permissions. A schema is a named container for database objects, which allows you to group objects into separate namespaces. For example, the AdventureWorks sample database contains schemas for Production, Sales, and HumanResources.

The four-part naming syntax for referring to objects specifies the schema name.

Server.Database.DatabaseSchema.DatabaseObject  

Schema Owners and Permissions

Schemas can be owned by any database principal, and a single principal can own multiple schemas. You can apply security rules to a schema, which are inherited by all objects in the schema. Once you set up access permissions for a schema, those permissions are automatically applied as new objects are added to the schema. Users can be assigned a default schema, and multiple database users can share the same schema.

By default, when developers create objects in a schema, the objects are owned by the security principal that owns the schema, not the developer. Object ownership can be transferred with the ALTER AUTHORIZATION Transact-SQL statement. A schema can also contain objects that are owned by different users and have more granular permissions than those assigned to the schema, although this is not recommended because it adds complexity to managing permissions. Objects can be moved between schemas, and schema ownership can be transferred between principals. Database users can be dropped without affecting schemas.
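
For illustration, a rough sketch (the schema, principal, and role names are placeholders) of creating a schema with an explicit owner, granting schema-wide permissions, and transferring ownership, executed here from ADO.NET:

using System.Data.SqlClient;

class SchemaPermissionsExample
{
    static void Main()
    {
        // CREATE SCHEMA must be the first statement in a batch, so each
        // statement is executed as its own batch.
        string[] statements =
        {
            "CREATE SCHEMA Reporting AUTHORIZATION ReportingAdmin;",
            "GRANT SELECT ON SCHEMA::Reporting TO ReportingReaders;",
            "ALTER AUTHORIZATION ON SCHEMA::Reporting TO AnotherPrincipal;"
        };

        using (SqlConnection connection = new SqlConnection(
            "Server=MSSQL1;Database=AdventureWorks;Integrated Security=true;"))
        {
            connection.Open();
            foreach (string statement in statements)
            {
                using (SqlCommand command = new SqlCommand(statement, connection))
                {
                    command.ExecuteNonQuery();
                }
            }
        }
    }
}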

Built-In Schemas

SQL Server ships with ten pre-defined schemas that have the same names as the built-in database users and roles. These exist mainly for backward compatibility. You can drop the schemas that have the same names as the fixed database roles if you do not need them. You cannot drop the following schemas:

  • dbo

  • guest

  • sys

  • INFORMATION_SCHEMA

If you drop them from the model database, they will not appear in new databases.

Note

The sys and INFORMATION_SCHEMA schemas are reserved for system objects. You cannot create objects in these schemas and you cannot drop them.

The dbo Schema

The dbo schema is the default schema for a newly created database. The dbo schema is owned by the dbo user account. By default, users created with the CREATE USER Transact-SQL command have dbo as their default schema.

Users who are assigned the dbo schema do not inherit the permissions of the dbo user account. No permissions are inherited from a schema by users; schema permissions are inherited by the database objects contained in the schema.

Note

When database objects are referenced by using a one-part name, SQL Server first looks in the user's default schema. If the object is not found there, SQL Server looks next in the dbo schema. If the object is not in the dbo schema, an error is returned.

External Resources

For more information on object ownership and schemas, see the following resources.

Resource | Description
User-Schema Separation | Describes the changes introduced by user-schema separation. Includes new behavior, its impact on ownership, catalog views, and permissions.

See also

Authorization and Permissions in SQL Server

When you create database objects, you must explicitly grant permissions to make them accessible to users. Every securable object has permissions that can be granted to a principal using permission statements.

The Principle of Least Privilege

Developing an application using a least-privileged user account (LUA) approach is an important part of a defense-in-depth strategy for countering security threats. The LUA approach ensures that users follow the principle of least privilege and always log on with limited user accounts. Administrative tasks are broken out using fixed server roles, and the use of the sysadmin fixed server role is severely restricted.

Always follow the principle of least privilege when granting permissions to database users. Grant the minimum permissions necessary to a user or role to accomplish a given task.

Important

Developing and testing an application using the LUA approach adds a degree of difficulty to the development process. It is easier to create objects and write code while logged on as a system administrator or database owner than it is using a LUA account. However, developing applications using a highly privileged account can obfuscate the impact of reduced functionality when least privileged users attempt to run an application that requires elevated permissions in order to function correctly. Granting excessive permissions to users in order to reacquire lost functionality can leave your application vulnerable to attack. Designing, developing and testing your application logged on with a LUA account enforces a disciplined approach to security planning that eliminates unpleasant surprises and the temptation to grant elevated privileges as a quick fix. You can use a SQL Server login for testing even if your application is intended to deploy using Windows authentication.

Role-Based Permissions

Granting permissions to roles rather than to users simplifies security administration. Permission sets that are assigned to roles are inherited by all members of the role. It is easier to add or remove users from a role than it is to recreate separate permission sets for individual users. Roles can be nested; however, too many levels of nesting can degrade performance. You can also add users to fixed database roles to simplify assigning permissions.

You can grant permissions at the schema level. Users automatically inherit permissions on all new objects created in the schema; you do not need to grant permissions as new objects are created.

Permissions Through Procedural Code

Encapsulating data access through modules such as stored procedures and user-defined functions provides an additional layer of protection around your application. You can prevent users from directly interacting with database objects by granting permissions only to stored procedures or functions while denying permissions to underlying objects such as tables. SQL Server achieves this by ownership chaining.
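
A rough sketch of this pattern follows (the procedure, table, role, and parameter names are placeholders, and the procedure and table are assumed to have the same owner so that ownership chaining applies). The role is granted EXECUTE on the procedure only, and the application calls the procedure rather than querying the table directly:

using System.Data;
using System.Data.SqlClient;

class ProcedurePermissionsExample
{
    static void Main()
    {
        // Permission setup, typically run once by an administrator:
        //   GRANT EXECUTE ON dbo.usp_GetOrdersByCustomer TO OrderEntryRole;
        //   -- No SELECT permission is granted on dbo.Orders; ownership chaining
        //   -- lets the procedure read the table on the caller's behalf.

        using (SqlConnection connection = new SqlConnection(
            "Server=MSSQL1;Database=AdventureWorks;Integrated Security=true;"))
        using (SqlCommand command = new SqlCommand(
            "dbo.usp_GetOrdersByCustomer", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.Add("@CustomerID", SqlDbType.Int).Value = 42;

            connection.Open();
            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    // Process each order row returned by the procedure.
                }
            }
        }
    }
}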

Permission Statements

The three Transact-SQL permission statements are described in the following table.

Permission Statement | Description
GRANT | Grants a permission.
REVOKE | Revokes a permission. This is the default state for a new object. A permission revoked from a user or role can still be inherited from other groups or roles to which the principal is assigned.
DENY | Denies a permission so that it cannot be inherited. DENY takes precedence over all other permissions, except that it does not apply to object owners or members of the sysadmin fixed server role. If you DENY permissions on an object to the public role, the permission is denied to all users and roles except object owners and sysadmin members.

The GRANT statement can assign permissions to a group or role that can be inherited by database users. However, the DENY statement takes precedence over all other permission statements; a user who has been denied a permission cannot inherit it from another role.

Note

Members of the sysadmin fixed server role and object owners cannot be denied permissions.

Ownership Chains

SQL Server ensures that only principals that have been granted permission can access objects. When multiple database objects access each other, the sequence is known as a chain. When SQL Server is traversing the links in the chain, it evaluates permissions differently than it would if it were accessing each item separately. When an object is accessed through a chain, SQL Server first compares the object's owner to the owner of the calling object (the previous link in the chain). If both objects have the same owner, permissions on the referenced object are not checked. Whenever an object accesses another object that has a different owner, the ownership chain is broken and SQL Server must check the caller's security context.

Procedural Code and Ownership Chaining

Suppose that a user is granted execute permissions on a stored procedure that selects data from a table. If the stored procedure and the table have the same owner, the user doesn't need to be granted any permissions on the table and can even be denied permissions. However, if the stored procedure and the table have different owners, SQL Server must check the user's permissions on the table before allowing access to the data.

Note

Ownership chaining does not apply to dynamic SQL statements. To call a procedure that executes a dynamic SQL statement, the caller must be granted permissions on the underlying tables, which can leave your application vulnerable to SQL injection attacks. SQL Server provides mechanisms, such as impersonation and signing modules with certificates, that do not require granting permissions on the underlying tables. These can also be used with CLR stored procedures.

External Resources

For more information, see the following resources.

Resource | Description
Permissions | Contains topics describing permissions hierarchy, catalog views, and permissions of fixed server and database roles.

See also

Data Encryption in SQL Server

SQL Server provides functions to encrypt and decrypt data using a certificate, asymmetric key, or symmetric key. It manages all of these in an internal certificate store. The store uses an encryption hierarchy that secures certificates and keys at one level with the layer above it in the hierarchy. This feature area of SQL Server is called Secret Storage.

The fastest mode of encryption supported by the encryption functions is symmetric key encryption. This mode is suitable for handling large volumes of data. The symmetric keys can be encrypted by certificates, passwords or other symmetric keys.

Keys and Algorithms

SQL Server supports several symmetric key encryption algorithms, including DES, Triple DES, RC2, RC4, 128-bit RC4, DESX, 128-bit AES, 192-bit AES, and 256-bit AES. The algorithms are implemented using the Windows Crypto API.

Within the scope of a database connection, SQL Server can maintain multiple open symmetric keys. An open key is retrieved from the store and is available for decrypting data. When a piece of data is decrypted, there is no need to specify the symmetric key to use. Each encrypted value contains the key identifier (key GUID) of the key used to encrypt it. The engine matches the encrypted byte stream to an open symmetric key, if the correct key has been decrypted and is open. This key is then used to perform decryption and return the data. If the correct key is not open, NULL is returned.
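
As a rough sketch (the key, certificate, table, and column names are placeholders, and the symmetric key, its protecting certificate, and the varbinary column are assumed to already exist), encrypting and decrypting a column value with an open symmetric key might look like this when run from ADO.NET:

using System;
using System.Data.SqlClient;

class ColumnEncryptionExample
{
    static void Main()
    {
        // Placeholder object names; requires permission to open the key and certificate.
        const string sql = @"
            OPEN SYMMETRIC KEY SalesKey DECRYPTION BY CERTIFICATE SalesCert;

            UPDATE Sales.CreditCard
            SET CardNumber_Encrypted = EncryptByKey(Key_GUID('SalesKey'), CardNumber);

            -- DecryptByKey locates the correct open key from the encrypted value itself.
            SELECT TOP 1 CONVERT(nvarchar(25), DecryptByKey(CardNumber_Encrypted))
            FROM Sales.CreditCard;

            CLOSE SYMMETRIC KEY SalesKey;";

        using (SqlConnection connection = new SqlConnection(
            "Server=MSSQL1;Database=AdventureWorks;Integrated Security=true;"))
        using (SqlCommand command = new SqlCommand(sql, connection))
        {
            connection.Open();
            Console.WriteLine(command.ExecuteScalar());
        }
    }
}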

For an example that shows how to work with encrypted data in a database, see Encrypt a Column of Data.

External Resources

For more information on data encryption, see the following resources.

Resource | Description
SQL Server Encryption | Provides an overview of encryption in SQL Server. This topic includes links to additional articles.
Encryption Hierarchy | Describes the encryption hierarchy in SQL Server and provides links to additional articles.

See also

CLR Integration Security in SQL Server

Microsoft SQL Server provides integration with the common language runtime (CLR) component of the .NET Framework. CLR integration allows you to write stored procedures, triggers, user-defined types, user-defined functions, user-defined aggregates, and streaming table-valued functions, using any .NET Framework language, such as Microsoft Visual Basic .NET or Microsoft Visual C#.

The CLR supports a security model called code access security (CAS) for managed code. In this model, permissions are granted to assemblies based on evidence supplied by the code in metadata. SQL Server integrates the user-based security model of SQL Server with the code access-based security model of the CLR.

External Resources

For more information on CLR integration with SQL Server, see the following resources.

Resource Description
Code Access Security Contains topics describing CAS in the .NET Framework.
CLR Integration Security Discusses the security model for managed code executing inside of SQL Server.

See also

Application Security Scenarios in SQL Server

There is no single correct way to create a secure SQL Server client application. Every application is unique in its requirements, deployment environment, and user population. An application that is reasonably secure when it is initially deployed can become less secure over time. It is impossible to predict with any accuracy what threats may emerge in the future.

SQL Server, as a product, has evolved over many versions to incorporate the latest security features that enable developers to create secure database applications. However, security doesn't come in the box; it requires continual monitoring and updating.

Common Threats

Developers need to understand security threats, the tools provided to counter them, and how to avoid self-inflicted security holes. Security can best be thought of as a chain, where a break in any one link compromises the strength of the whole. The following list includes some common security threats that are discussed in more detail in the topics in this section.

SQL Injection

SQL Injection is the process by which a malicious user enters Transact-SQL statements instead of valid input. If the input is passed directly to the server without being validated and if the application inadvertently executes the injected code, then the attack has the potential to damage or destroy data. You can thwart SQL injection attacks by using stored procedures and parameterized commands, avoiding dynamic SQL, and restricting the permissions of all users.

Elevation of Privilege

Elevation of privilege attacks occur when a user is able to assume the privileges of a trusted account, such as an owner or administrator. Always run under least-privileged user accounts and assign only needed permissions. Avoid using administrative or owner accounts for executing code. This limits the amount of damage that can occur if an attack succeeds. When performing tasks that require additional permissions, use procedure signing or impersonation only for the duration of the task. You can sign stored procedures with certificates or use impersonation to temporarily assign permissions.

Probing and Intelligent Observation

A probing attack can use error messages generated by an application to search for security vulnerabilities. Implement error handling in all procedural code to prevent SQL Server error information from being returned to the end user.
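
For example, procedural code can trap errors with TRY...CATCH, log the details on the server, and return only a generic message to the client. A minimal sketch, assuming hypothetical dbo.Orders and dbo.ErrorLog tables:

CREATE PROCEDURE dbo.usp_GetOrder @OrderID int
AS
BEGIN TRY
    SELECT OrderID, Amount FROM dbo.Orders WHERE OrderID = @OrderID;
END TRY
BEGIN CATCH
    -- Log the real error on the server for administrators.
    INSERT INTO dbo.ErrorLog (ErrorNumber, ErrorMessage, LoggedAt)
    VALUES (ERROR_NUMBER(), ERROR_MESSAGE(), GETDATE());

    -- Return only a generic message to the caller.
    RAISERROR ('An error occurred while processing the request.', 16, 1);
END CATCH;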

Authentication

A connection string injection attack can occur when using SQL Server logins if a connection string based on user input is constructed at run time. If the connection string is not checked for valid keyword pairs, an attacker can insert extra characters, potentially accessing sensitive data or other resources on the server. Use Windows authentication wherever possible. If you must use SQL Server logins, use the SqlConnectionStringBuilder to create and validate connection strings at run time.

Passwords

Many attacks succeed because an intruder was able to obtain or guess a password for a privileged user. Passwords are your first line of defense against intruders, so setting strong passwords is essential to the security of your system. Create and enforce password policies for mixed mode authentication.

Always assign a strong password to the sa account, even when using Windows Authentication.

In This Section

Managing Permissions with Stored Procedures in SQL Server
Describes how to use stored procedures to manage permissions and control data access. Using stored procedures is an effective way to respond to many security threats.

Writing Secure Dynamic SQL in SQL Server
Describes techniques for writing secure dynamic SQL using stored procedures.

Signing Stored Procedures in SQL Server
Describes how to sign a stored procedure with a certificate to enable users to work with data they do not have direct access to. This enables stored procedures to perform operations that the caller does not have permissions to perform directly.

Customizing Permissions with Impersonation in SQL Server
Describes how to use the EXECUTE AS clause to impersonate another user. Impersonation switches the execution context from the caller to the specified user.

Granting Row-Level Permissions in SQL Server
Describes how to implement row-level permissions to restrict data access.

Creating Application Roles in SQL Server
Describes features and functionality of application roles.

Enabling Cross-Database Access in SQL Server
Describes how to enable cross-database access without jeopardizing security.

See also

Managing Permissions with Stored Procedures in SQL Server

One method of creating multiple lines of defense around your database is to implement all data access using stored procedures or user-defined functions. You revoke or deny all permissions to underlying objects, such as tables, and grant EXECUTE permissions on stored procedures. This effectively creates a security perimeter around your data and database objects.

Stored Procedure Benefits

Stored procedures have the following benefits:

  • Data logic and business rules can be encapsulated so that users can access data and objects only in ways that developers and database administrators intend.

  • Parameterized stored procedures that validate all user input can be used to thwart SQL injection attacks. If you use dynamic SQL, be sure to parameterize your commands, and never include parameter values directly into a query string.

  • Ad hoc queries and data modifications can be disallowed. This prevents users from maliciously or inadvertently destroying data or executing queries that impair performance on the server or the network.

  • Errors can be handled in procedure code without being passed directly to client applications. This prevents error messages from being returned that could aid in a probing attack. Log errors and handle them on the server.

  • Stored procedures can be written once, and accessed by many applications.

  • Client applications do not need to know anything about the underlying data structures. Stored procedure code can be changed without requiring changes in client applications as long as the changes do not affect parameter lists or returned data types.

  • Stored procedures can reduce network traffic by combining multiple operations into one procedure call.

Stored Procedure Execution

Stored procedures take advantage of ownership chaining to provide access to data so that users do not need to have explicit permission to access database objects. An ownership chain exists when objects that access each other sequentially are owned by the same user. For example, a stored procedure can call other stored procedures, or a stored procedure can access multiple tables. If all objects in the chain of execution have the same owner, then SQL Server only checks the EXECUTE permission for the caller, not the caller's permissions on other objects. Therefore you need to grant only EXECUTE permissions on stored procedures; you can revoke or deny all permissions on the underlying tables.
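
The following minimal sketch shows the pattern; the dbo.Sales table, the procedure, and the SalesReaders role are hypothetical. The role can run the procedure but has no permissions on the table, and the unbroken ownership chain allows the SELECT inside the procedure to succeed:

CREATE PROCEDURE dbo.usp_GetSales
AS
    SELECT OrderID, Amount FROM dbo.Sales;
GO

-- No direct access to the table; access only through the procedure.
DENY SELECT ON dbo.Sales TO SalesReaders;
GRANT EXECUTE ON dbo.usp_GetSales TO SalesReaders;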

Best Practices

Simply writing stored procedures isn't enough to adequately secure your application. You should also consider the following potential security holes.

  • Grant EXECUTE permissions on the stored procedures for database roles you want to be able to access the data.

  • Revoke or deny all permissions to the underlying tables for all roles and users in the database, including the public role. All users inherit permissions from public. Therefore denying permissions to public means that only owners and sysadmin members have access; all other users will be unable to inherit permissions from membership in other roles.

  • Do not add users or roles to the sysadmin or db_owner roles. System administrators and database owners can access all database objects.

  • Disable the guest account. This will prevent anonymous users from connecting to the database. The guest account is disabled by default in new databases.

  • Implement error handling and log errors.

  • Create parameterized stored procedures that validate all user input. Treat all user input as untrusted.

  • Avoid dynamic SQL unless absolutely necessary. Use the Transact-SQL QUOTENAME() function to delimit a string value and escape any occurrence of the delimiter in the input string.
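
For example, when an object name must be supplied at run time, QUOTENAME can delimit it before it is concatenated into a dynamic statement. A minimal sketch, using a hypothetical dbo.Sales table name:

DECLARE @tableName sysname = N'Sales';
DECLARE @sql nvarchar(max);

-- QUOTENAME wraps the name in brackets and doubles any ] characters, so
-- input such as Sales]; DROP TABLE dbo.Sales; -- remains a single (invalid)
-- object name instead of becoming executable statements.
SET @sql = N'SELECT OrderID, Amount FROM dbo.' + QUOTENAME(@tableName) + N';';
EXEC sp_executesql @sql;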

External Resources

For more information, see the following resources.

Resource Description
Stored Procedures and SQL Injection in SQL Server Books Online Topics describe how to create stored procedures and how SQL Injection works.

See also

Writing Secure Dynamic SQL in SQL Server

SQL Injection is the process by which a malicious user enters Transact-SQL statements instead of valid input. If the input is passed directly to the server without being validated and if the application inadvertently executes the injected code, the attack has the potential to damage or destroy data.

Any procedure that constructs SQL statements should be reviewed for injection vulnerabilities because SQL Server will execute all syntactically valid queries that it receives. Even parameterized data can be manipulated by a skilled and determined attacker. If you use dynamic SQL, be sure to parameterize your commands, and never include parameter values directly into the query string.

Anatomy of a SQL Injection Attack

The injection process works by prematurely terminating a text string and appending a new command. Because the inserted command may have additional strings appended to it before it is executed, the malefactor terminates the injected string with a comment mark "--". Subsequent text is ignored at execution time. Multiple commands can be inserted using a semicolon (;) delimiter.

As long as injected SQL code is syntactically correct, tampering cannot be detected programmatically. Therefore, you must validate all user input and carefully review code that executes constructed SQL commands in the server that you are using. Never concatenate user input that is not validated. String concatenation is the primary point of entry for script injection.
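
The following sketch illustrates the pattern with a hypothetical dbo.Customers table. The vulnerable version concatenates the input, so a value such as ' OR 1=1;-- ends the intended string, changes the WHERE clause, and comments out the rest of the statement; the safe version binds the value as a parameter of sp_executesql, so it can never execute as code:

-- Vulnerable: user input is concatenated into the statement.
DECLARE @name nvarchar(50) = N''' OR 1=1;--';
DECLARE @sql nvarchar(max) =
    N'SELECT CustomerID FROM dbo.Customers WHERE Name = ''' + @name + N'''';
-- Resulting text: SELECT CustomerID FROM dbo.Customers WHERE Name = '' OR 1=1;--'
EXEC (@sql);   -- returns every customer

-- Safe: the value is passed as a parameter, not appended as text.
EXEC sp_executesql
    N'SELECT CustomerID FROM dbo.Customers WHERE Name = @name',
    N'@name nvarchar(50)', @name = @name;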

Here are some helpful guidelines:

  • Never build Transact-SQL statements directly from user input; use stored procedures to validate user input.

  • Validate user input by testing type, length, format, and range. Use the Transact-SQL QUOTENAME() function to escape system names or the REPLACE() function to escape any character in a string.

  • Implement multiple layers of validation in each tier of your application.

  • Test the size and data type of input and enforce appropriate limits. This can help prevent deliberate buffer overruns.

  • Test the content of string variables and accept only expected values. Reject entries that contain binary data, escape sequences, and comment characters.

  • When you are working with XML documents, validate all data against its schema as it is entered.

  • In multi-tiered environments, all data should be validated before admission to the trusted zone.

  • Do not accept the following strings in fields from which file names can be constructed: AUX, CLOCK$, COM1 through COM8, CON, CONFIG$, LPT1 through LPT8, NUL, and PRN.

  • Use SqlParameter objects with stored procedures and commands to provide type checking and length validation.

  • Use Regex expressions in client code to filter invalid characters.

Dynamic SQL Strategies

Executing dynamically created SQL statements in your procedural code breaks the ownership chain, causing SQL Server to check the permissions of the caller against the objects being accessed by the dynamic SQL.

SQL Server has methods for granting users access to data using stored procedures and user-defined functions that execute dynamic SQL.

EXECUTE AS

The EXECUTE AS clause replaces the permissions of the caller with those of the user specified in the EXECUTE AS clause. Nested stored procedures or triggers execute under the security context of the proxy user. This can break applications that rely on row-level security or require auditing. Some functions that return the identity of the user return the user specified in the EXECUTE AS clause, not the original caller. Execution context is reverted to the original caller only after the procedure completes or when a REVERT statement is issued.

Certificate Signing

When a stored procedure that has been signed with a certificate executes, the permissions granted to the certificate user are merged with those of the caller. The execution context remains the same; the certificate user does not impersonate the caller. Signing stored procedures requires several steps to implement. Each time the procedure is modified, it must be re-signed.

Cross Database Access

Cross-database ownership chaining does not work in cases where dynamically created SQL statements are executed. You can work around this in SQL Server by creating a stored procedure that accesses data in another database and signing the procedure with a certificate that exists in both databases. This gives users access to the database resources used by the procedure without granting them database access or permissions.

External Resources

For more information, see the following resources.

Resource Description
Stored Procedures and SQL Injection in SQL Server Books Online Topics describe how to create stored procedures and how SQL Injection works.

See also

Signing Stored Procedures in SQL Server

A digital signature is a data digest encrypted with the private key of the signer. The private key ensures that the digital signature is unique to its bearer or owner. You can sign stored procedures, functions (except for inline table-valued functions), triggers, and assemblies.

You can sign a stored procedure with a certificate or an asymmetric key. This is designed for scenarios when permissions cannot be inherited through ownership chaining or when the ownership chain is broken, such as dynamic SQL. You can then create a user mapped to the certificate, granting the certificate user permissions on the objects the stored procedure needs to access.

You can also create a login mapped to the same certificate, and then grant any necessary server-level permissions to that login, or add the login to one or more of the fixed server roles. This is designed to avoid enabling the TRUSTWORTHY database setting for scenarios in which higher level permissions are needed.

When the stored procedure is executed, SQL Server combines the permissions of the certificate user and/or login with those of the caller. Unlike the EXECUTE AS clause, it does not change the execution context of the procedure. Built-in functions that return login and user names return the name of the caller, not the certificate user name.

Creating Certificates

When you sign a stored procedure with a certificate or asymmetric key, a data digest consisting of the encrypted hash of the stored procedure code, along with the execute-as user, is created using the private key. At run time, the data digest is decrypted with the public key and compared with the hash value of the stored procedure. Changing the execute-as user invalidates the hash value so that the digital signature no longer matches. Modifying the stored procedure drops the signature entirely, which prevents someone who does not have access to the private key from changing the stored procedure code. In either case, you must re-sign the procedure each time you change the code or the execute-as user.

There are two required steps involved in signing a module:

  1. Create a certificate using the Transact-SQL CREATE CERTIFICATE [certificateName] statement. This statement has several options for setting a start and end date and a password. The default expiration date is one year.

  2. Sign the procedure with the certificate using the Transact-SQL ADD SIGNATURE TO [procedureName] BY CERTIFICATE [certificateName] statement.

Once the module has been signed, one or more principals need to be created to hold the additional permissions that should be associated with the certificate.

If the module needs additional database-level permissions:

  1. Create a database user associated with that certificate using the Transact-SQL CREATE USER [userName] FROM CERTIFICATE [certificateName] statement. This user exists in the database only and is not associated with a login unless a login has also been created from that same certificate.

  2. Grant the certificate user the required database-level permissions.

If the module needs additional server-level permissions:

  1. Copy the certificate to the master database.

  2. Create a login associated with that certificate using the Transact-SQL CREATE LOGIN [userName] FROM CERTIFICATE [certificateName] statement.

  3. Grant the certificate login the required server-level permissions.
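
Putting the signing and database-level steps together, a minimal sketch might look like the following; the certificate, procedure, table, and role names are hypothetical, and the procedure is assumed to already exist:

CREATE CERTIFICATE SalesSigningCert
    ENCRYPTION BY PASSWORD = '<strong password>'
    WITH SUBJECT = 'Signs dbo.usp_GetSales',
    EXPIRY_DATE = '20301231';
GO

ADD SIGNATURE TO dbo.usp_GetSales
    BY CERTIFICATE SalesSigningCert
    WITH PASSWORD = '<strong password>';
GO

-- The certificate user holds the extra permissions; callers still need
-- only EXECUTE on the procedure.
CREATE USER SalesSigningCertUser FROM CERTIFICATE SalesSigningCert;
GRANT SELECT ON dbo.Sales TO SalesSigningCertUser;
GRANT EXECUTE ON dbo.usp_GetSales TO SalesReaders;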

Note

A certificate cannot grant permissions to a user that has had permissions revoked using the DENY statement. DENY always takes precedence over GRANT, preventing the caller from inheriting permissions granted to the certificate user.

External Resources

For more information, see the following resources.

Resource Description
Module Signing in SQL Server Books Online Describes module signing, providing a sample scenario and links to the relevant Transact-SQL topics.
Signing Stored Procedures with a Certificate in SQL Server Books Online Provides a tutorial for signing a stored procedure with a certificate.

See also

Customizing Permissions with Impersonation in SQL Server

Many applications use stored procedures to access data, relying on ownership chaining to restrict access to base tables. You can grant EXECUTE permissions on stored procedures, revoking or denying permissions on the base tables. SQL Server does not check the permissions of the caller if the stored procedure and tables have the same owner. However, ownership chaining doesn't work if objects have different owners or in the case of dynamic SQL.

You can use the EXECUTE AS clause in a stored procedure when the caller doesn't have permissions on the referenced database objects. The effect of the EXECUTE AS clause is that the execution context is switched to the proxy user. All code, as well as any calls to nested stored procedures or triggers, executes under the security context of the proxy user. Execution context is reverted to the original caller only after execution of the procedure or when a REVERT statement is issued.

Context Switching with the EXECUTE AS Statement

The Transact-SQL EXECUTE AS statement allows you to switch the execution context of a statement by impersonating another login or database user. This is a useful technique for testing queries and procedures as another user.

EXECUTE AS LOGIN = 'loginName';  
EXECUTE AS USER = 'userName';  

You must have IMPERSONATE permissions on the login or user you are impersonating. This permission is implied for members of the sysadmin fixed server role in all databases, and for members of the db_owner role in databases that they own.

Granting Permissions with the EXECUTE AS Clause

You can use the EXECUTE AS clause in the definition header of a stored procedure, trigger, or user-defined function (except for inline table-valued functions). This causes the procedure to execute in the context of the user name or keyword specified in the EXECUTE AS clause. You can create a proxy user in the database that is not mapped to a login, granting it only the necessary permissions on the objects accessed by the procedure. Only the proxy user specified in the EXECUTE AS clause must have permissions on all objects accessed by the module.

Note

Some actions, such as TRUNCATE TABLE, do not have grantable permissions. By incorporating the statement within a procedure and specifying a proxy user who has ALTER TABLE permissions, you can extend the permissions to truncate the table to callers who have only EXECUTE permissions on the procedure.

The context specified in the EXECUTE AS clause is valid for the duration of the procedure, including nested procedures and triggers. Context reverts to the caller when execution is complete or the REVERT statement is issued.

There are three steps involved in using the EXECUTE AS clause in a procedure.

  1. Create a proxy user in the database that is not mapped to a login. This is not required, but it helps when managing permissions.

CREATE USER proxyUser WITHOUT LOGIN

  2. Grant the proxy user the necessary permissions.

  3. Add the EXECUTE AS clause to the stored procedure or user-defined function.

CREATE PROCEDURE [procName] WITH EXECUTE AS 'proxyUser' AS ...
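
For example, the TRUNCATE TABLE scenario from the note above might be sketched as follows; the staging table, proxy user, and role names are hypothetical. Callers need only EXECUTE on the procedure, while the proxy user holds the ALTER permission that TRUNCATE TABLE requires:

CREATE USER TruncateProxy WITHOUT LOGIN;
GRANT ALTER ON dbo.SalesStaging TO TruncateProxy;
GO

CREATE PROCEDURE dbo.usp_ResetStaging
WITH EXECUTE AS 'TruncateProxy'
AS
    TRUNCATE TABLE dbo.SalesStaging;
GO

GRANT EXECUTE ON dbo.usp_ResetStaging TO DataLoaders;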

Note

Applications that require auditing can break because the original security context of the caller is not retained. Built-in functions that return the identity of the current user, such as SESSION_USER, USER, or USER_NAME, return the user associated with the EXECUTE AS clause, not the original caller.

Using EXECUTE AS with REVERT

You can use the Transact-SQL REVERT statement to revert to the original execution context.

The EXECUTE AS statement's optional WITH COOKIE INTO @varbinary_variable clause captures a cookie that represents the context switch. The matching REVERT WITH COOKIE = @varbinary_variable clause reverts the execution context only when the correct cookie value is supplied. This is useful in environments where connection pooling is used: because the cookie value is known only to the caller of the EXECUTE AS statement, the caller can guarantee that the execution context cannot be changed by the end user that invokes the application. When the connection is closed, it is returned to the pool. For more information on connection pooling in ADO.NET, see SQL Server Connection Pooling (ADO.NET).
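
A minimal sketch of the pattern, using a hypothetical proxy user name:

DECLARE @cookie varbinary(8000);

EXECUTE AS USER = 'proxyUser' WITH COOKIE INTO @cookie;

-- Statements here run in the security context of proxyUser.

-- Only a caller holding the cookie value can revert the context.
REVERT WITH COOKIE = @cookie;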

Specifying the Execution Context

In addition to specifying a user, you can also use EXECUTE AS with any of the following keywords.

  • CALLER. Executing as CALLER is the default; if no other option is specified, then the procedure executes in the security context of the caller.

  • OWNER. Executing as OWNER executes the procedure in the context of the procedure owner. If the procedure is created in a schema owned by dbo or the database owner, the procedure will execute with unrestricted permissions.

  • SELF. Executing as SELF executes in the security context of the creator of the stored procedure. This is equivalent to executing as a specified user, where the specified user is the person creating or altering the procedure.

See also

Granting Row-Level Permissions in SQL Server

In some scenarios, there is a requirement to control access to data at a more granular level than what simply granting, revoking, or denying permissions provides. For example, a hospital database application may require individual doctors to be restricted to accessing information related only to their patients. Similar requirements exist in many environments, including finance, law, government, and military applications. To help address these scenarios, SQL Server 2016 provides a Row-Level Security feature that simplifies and centralizes row-level access logic in a security policy. For earlier versions of SQL Server, similar functionality can be achieved by using views to enact row-level filtering.

Implementing Row-level Filtering

Row-level filtering is used by applications that store information in a single table, as in the hospital example above. To implement row-level filtering, each row has a column that defines a differentiating parameter, such as a user name, label, or other identifier. You create either a security policy or a view on the table, which filters the rows that the user can access. You then create parameterized stored procedures that control the types of queries the user can execute.

The following example describes how to configure row-level filtering based on a user or login name:

  • Create the table, adding a column to store the name.

  • Enable row-level filtering:

    If you are using SQL Server 2016 or higher, or Azure SQL Database, create a security policy that adds a predicate on the table restricting the rows returned to those that match either the current database user (using the CURRENT_USER() built-in function) or the current login name (using the SUSER_SNAME() built-in function):

    SQL
    CREATE SCHEMA Security
    GO
    
    CREATE FUNCTION Security.userAccessPredicate(@UserName sysname)
        RETURNS TABLE
        WITH SCHEMABINDING
    AS
        RETURN SELECT 1 AS accessResult
        WHERE @UserName = SUSER_SNAME()
    GO
    
    CREATE SECURITY POLICY Security.userAccessPolicy
        ADD FILTER PREDICATE Security.userAccessPredicate(UserName) ON dbo.MyTable,
        ADD BLOCK PREDICATE Security.userAccessPredicate(UserName) ON dbo.MyTable
    GO
    
  • If you are using a version of SQL Server prior to 2016, you can achieve similar functionality using a view:

    SQL
    CREATE VIEW vw_MyTable
    AS
        SELECT * FROM MyTable
        WHERE UserName = SUSER_SNAME()
    GO
    
  • Create stored procedures to select, insert, update, and delete data. If the filtering is enacted by a security policy, the stored procedures should perform these operations on the base table directly; otherwise, if the filtering is enacted by a view, the stored procedures should instead operate against the view. The security policy or view will automatically filter the rows returned or modified by user queries, and the stored procedure will provide a harder security boundary to prevent users with direct query access from successfully running queries that can infer the existence of filtered data.

  • For stored procedures that insert data, capture the user name using the same function specified in the security policy or view, and insert that value into the UserName column.

  • Deny all permissions on the tables (and views, if applicable) to the public role. Users will not be able to inherit permissions from other database roles, because the filter predicate is based on user or login names, not on roles.

  • Grant EXECUTE on the stored procedures to database roles. Users can only access data through the stored procedures provided.

See also

Creating Application Roles in SQL Server

Application roles provide a way to assign permissions to an application instead of a database role or user. Users can connect to the database, activate the application role, and assume the permissions granted to the application. The permissions granted to the application role are in force for the duration of the connection.

Important

Application roles are activated when a client application supplies an application role name and a password in the connection string. They present a security vulnerability in a two-tier application because the password must be stored on the client computer. In a three-tier application, you can store the password so that it cannot be accessed by users of the application.

Application Role Features

Application roles have the following features:

  • Unlike database roles, application roles contain no members.

  • Application roles are activated when an application supplies the application role name and a password to the sp_setapprole system stored procedure.

  • The password must be stored on the client computer and supplied at run time; an application role cannot be activated from inside of SQL Server.

  • The password is not encrypted when it is supplied by the application; on the server, the password is stored as a one-way hash.

  • Once activated, permissions acquired through the application role remain in effect for the duration of the connection.

  • The application role inherits permissions granted to the public role.

  • If a member of the sysadmin fixed server role activates an application role, the security context switches to that of the application role for the duration of the connection.

  • If you create a guest account in a database that has an application role, you do not need to create a database user account for the application role or for any of the logins that invoke it. Application roles can directly access another database only if a guest account exists in the second database.

  • Built-in functions that return login names, such as SYSTEM_USER, return the name of the login that invoked the application role. Built-in functions that return database user names return the name of the application role.

The Principle of Least Privilege

Grant an application role only the permissions it requires, in case the password is compromised. Revoke permissions granted to the public role in any database that uses an application role. Disable the guest account in any database that you do not want callers of the application role to be able to access.

Application Role Enhancements

The execution context can be switched back to the original caller after activating an application role, removing the need to disable connection pooling. The sp_setapprole procedure has an option that creates a cookie containing context information about the caller. You can revert the session by calling the sp_unsetapprole procedure and passing it the cookie.
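
A minimal sketch of activating and reverting an application role with a cookie; the role name and password are hypothetical:

DECLARE @cookie varbinary(8000);

EXEC sp_setapprole
    @rolename = 'SalesAppRole',
    @password = '<strong password>',
    @fCreateCookie = 1,
    @cookie = @cookie OUTPUT;

-- The connection now has the application role's permissions.

EXEC sp_unsetapprole @cookie;  -- revert to the original context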

Application Role Alternatives

Application roles depend on the security of a password, which presents a potential security vulnerability. Passwords may be exposed by being embedded in application code or saved on disk.

You may want to consider the following alternatives.

  • Use context switching with the EXECUTE AS statement with its NO REVERT and WITH COOKIE clauses. You can create a user account in a database that is not mapped to a login. You then assign permissions to this account. Using EXECUTE AS with a login-less user is more secure because it is permission-based, not password-based. For more information, see Customizing Permissions with Impersonation in SQL Server.

  • Sign stored procedures with certificates, granting only permission to execute the procedures. For more information, see Signing Stored Procedures in SQL Server.

External Resources

For more information, see the following resources.

Resource Description
Application Roles Describes how to create and use application roles in SQL Server 2008.

See also

Enabling Cross-Database Access in SQL Server

Cross-database ownership chaining occurs when a procedure in one database depends on objects in another database. A cross-database ownership chain works in the same way as ownership chaining within a single database, except that an unbroken ownership chain requires that all the object owners are mapped to the same login account. If the source object in the source database and the target objects in the target databases are owned by the same login account, SQL Server does not check permissions on the target objects.

Off By Default

Ownership chaining across databases is turned off by default. Microsoft recommends that you disable cross-database ownership chaining because it exposes you to the following security risks:

  • Database owners and members of the db_ddladmin or db_owner database roles can create objects that are owned by other users. These objects can potentially target objects in other databases. This means that if you enable cross-database ownership chaining, you must fully trust these users with data in all databases.

  • Users with CREATE DATABASE permission can create new databases and attach existing databases. If cross-database ownership chaining is enabled, these users can use the databases they create or attach to access objects in other databases for which they have not been granted permissions.

Enabling Cross-database Ownership Chaining

Cross-database ownership chaining should only be enabled in environments where you can fully trust highly-privileged users. It can be configured during setup for all databases, or selectively for specific databases using the Transact-SQL commands sp_configure and ALTER DATABASE.

To selectively configure cross-database ownership chaining, use sp_configure to turn it off for the server. Then use the ALTER DATABASE command with SET DB_CHAINING ON to configure cross-database ownership chaining for only the databases that require it.

The following sample turns on cross-database ownership chaining for all databases:

EXECUTE sp_configure 'show advanced options', 1;
RECONFIGURE;  
EXECUTE sp_configure 'cross db ownership chaining', 1;  
RECONFIGURE;  

The following sample turns on cross-database ownership chaining for specific databases:

ALTER DATABASE Database1 SET DB_CHAINING ON;  
ALTER DATABASE Database2 SET DB_CHAINING ON;  

Dynamic SQL

Cross-database ownership chaining does not work in cases where dynamically created SQL statements are executed unless the same user exists in both databases. You can work around this in SQL Server by creating a stored procedure that accesses data in another database and signing the procedure with a certificate that exists in both databases. This gives users access to the database resources used by the procedure without granting them database access or permissions.

External Resources

For more information, see the following resources.

Resource Description
Extending Database Impersonation by Using EXECUTE AS and Cross DB Ownership Chaining Option. Articles describe how to configure cross-database ownership chaining for an instance of SQL Server.

See also

SQL Server Express Security

Microsoft SQL Server Express Edition (SQL Server Express) is based on Microsoft SQL Server, and supports most of the features of the database engine. It is designed so that nonessential features and network connectivity are off by default. This reduces the surface area available for attack by a malicious user.

SQL Server Express is usually installed as a named instance. The default name of the instance is SQLExpress. A named instance is identified by the network name of the computer plus the instance name that you specify during installation.

Network Access

For security reasons, networking protocols are disabled by default in SQL Server Express. This prevents attacks from outside users that might compromise the computer that hosts the instance of SQL Server Express. You must explicitly enable network connectivity and start the SQL Server Browser service to connect to a SQL Server Express instance from another computer.

Once network connectivity is enabled, a SQL Server Express instance has the same security requirements as the other editions of SQL Server.

User Instances

A user instance is a separate instance of the SQL Server Express database engine that is generated by a parent instance of SQL Server Express. The primary goal of a user instance is to allow users who are running Windows under a least-privilege user account to have system administrator (sysadmin) privileges on the SQL Server Express instance on their local computer. User instances are not intended for users who are system administrators on their own computers.

A user instance is generated from a primary instance of SQL Server or SQL Server Express on behalf of a user. It runs as a user process under the Windows security context of the user, not as a service. SQL Server logins are disallowed; only Windows logins are supported. This prevents software executing on a user instance from making system-wide changes that the user would not have permissions to make. A user instance is also known as a child or client instance, and is sometimes referred to by using the RANU acronym ("run as normal user").

Each user instance is isolated from its parent instance and from other user instances running on the same computer. Databases installed on user instances are opened in single-user mode only; multiple users cannot connect to them. Replication, distributed queries and remote connections are disabled for user instances. When connected to a user instance, users do not have any special privileges on the parent SQL Server Express instance.

External Resources

For more information about SQL Server Express, see the following resources.

Resource Description
Microsoft SQL Server 2005 Express Edition Books Online Complete documentation for SQL Server 2005 Express Edition.
User Instances for Non-Administrators in SQL Server Books Online Describes how to create and deploy user instances.
SQL Server Express User Instances Describes user instance capabilities in an ADO.NET application. Provides information about how to enable a user instance, connect to a user instance using a SqlConnection, user instance lifetime, and user instance scenarios.

See also

SQL Server Data Types and ADO.NET

SQL Server and the .NET Framework are based on different type systems, which can result in potential data loss. To preserve data integrity, the .NET Framework Data Provider for SQL Server (System.Data.SqlClient) provides typed accessor methods for working with SQL Server data. You can use the SqlDbType enumeration to specify SqlParameter data types.

For more information and a table that describes the data type mappings between SQL Server and .NET Framework data types, see SQL Server Data Type Mappings.

SQL Server 2008 introduces new data types that are designed to meet business needs to work with date and time, structured, semi-structured, and unstructured data. These are documented in SQL Server 2008 Books Online.

The SQL Server data types that are available for use in your application depend on the version of SQL Server that you are using. For more information, see the relevant version of SQL Server Books Online in the following table.

SQL Server Books Online

  1. Data Types (Database Engine)

In This Section

SqlTypes and the DataSet
Describes type support for SqlTypes in the DataSet.

Handling Null Values
Demonstrates how to work with null values and three-valued logic.

Comparing GUID and uniqueidentifier Values
Demonstrates how to work with GUID and uniqueidentifier values in SQL Server and the .NET Framework.

Date and Time Data
Describes how to use the new date and time data types introduced in SQL Server 2008.

Large UDTs
Demonstrates how to retrieve data from large value UDTs introduced in SQL Server 2008.

XML Data in SQL Server
Describes how to work with XML data retrieved from SQL Server.

Reference

DataSet
Describes the DataSet class and all of its members.

System.Data.SqlTypes
Describes the SqlTypes namespace and all of its members.

SqlDbType
Describes the SqlDbType enumeration and all of its members.

DbType
Describes the DbType enumeration and all of its members.

See also

SqlTypes and the DataSet

ADO.NET 2.0 introduced enhanced type support for the DataSet through the System.Data.SqlTypes namespace. The types in System.Data.SqlTypes are designed to provide data types with the same semantics and precision as the data types in a SQL Server database. Each data type in System.Data.SqlTypes has an equivalent data type in SQL Server, with the same underlying data representation.

Using System.Data.SqlTypes directly in a DataSet confers several benefits when working with SQL Server data types. System.Data.SqlTypes supports the same semantics as SQL Server native data types. Specifying one of the System.Data.SqlTypes in the definition of a DataColumn eliminates the loss of precision that can occur when converting decimal or numeric data types to one of the common language runtime (CLR) data types.

Example

The following example creates a DataTable object, explicitly defining the DataColumn data types by using System.Data.SqlTypes instead of CLR types. The code fills the DataTable with data from the Sales.SalesOrderDetail table in the AdventureWorks database in SQL Server. The output displayed in the console window shows the data type of each column, and the values retrieved from SQL Server.

C#
static private void GetSqlTypesAW(string connectionString)
{
    // Create a DataTable and specify a SqlType
    // for each column.
    DataTable table = new DataTable();
    DataColumn idColumn =
        table.Columns.Add("SalesOrderID", typeof(SqlInt32));
    DataColumn priceColumn =
        table.Columns.Add("UnitPrice", typeof(SqlMoney));
    DataColumn totalColumn =
        table.Columns.Add("LineTotal", typeof(SqlDecimal));
    DataColumn columnModifiedDate =
        table.Columns.Add("ModifiedDate", typeof(SqlDateTime));

    // Open a connection to SQL Server and fill the DataTable
    // with data from the Sales.SalesOrderDetail table
    // in the AdventureWorks sample database.
    using (SqlConnection connection = new SqlConnection(connectionString))
    {
        string queryString =
            "SELECT TOP 5 SalesOrderID, UnitPrice, LineTotal, ModifiedDate "
            + "FROM Sales.SalesOrderDetail WHERE LineTotal < @LineTotal";

        // Create the SqlCommand.
        SqlCommand command = new SqlCommand(queryString, connection);

        // Create the SqlParameter and assign a value.
        SqlParameter parameter =
            new SqlParameter("@LineTotal", SqlDbType.Decimal);
        parameter.Value = 1.5;
        command.Parameters.Add(parameter);

        // Open the connection and load the data.
        connection.Open();
        SqlDataReader reader =
            command.ExecuteReader(CommandBehavior.CloseConnection);
        table.Load(reader);

        // Close the SqlDataReader.
        reader.Close();
    }

    // Display the SqlType of each column.
    Console.WriteLine("Data Types:");
    foreach (DataColumn column in table.Columns)
    {
        Console.WriteLine(" {0} -- {1}",
            column.ColumnName, column.DataType.UnderlyingSystemType);
    }

    // Display the value for each row.
    Console.WriteLine("Values:");
    foreach (DataRow row in table.Rows)
    {
        Console.Write(" {0}, ", row["SalesOrderID"]);
        Console.Write(" {0}, ", row["UnitPrice"]);
        Console.Write(" {0}, ", row["LineTotal"]);
        Console.Write(" {0} ", row["ModifiedDate"]);
        Console.WriteLine();
    }
}

See also

Handling Null Values

A null value in a relational database is used when the value in a column is unknown or missing. A null is neither an empty string (for character or datetime data types) nor a zero value (for numeric data types). The ANSI SQL-92 specification states that a null must be the same for all data types, so that all nulls are handled consistently. The System.Data.SqlTypes namespace provides null semantics by implementing the INullable interface. Each of the data types in System.Data.SqlTypes has its own IsNull property and a Null value that can be assigned to an instance of that data type.

Note

The .NET Framework version 2.0 introduced support for nullable types, which allow programmers to extend a value type to represent all values of the underlying type. These CLR nullable types represent an instance of the Nullable structure. This capability is especially useful when value types are boxed and unboxed, providing enhanced compatibility with object types. CLR nullable types are not intended for storage of database nulls because an ANSI SQL null does not behave the same way as a null reference (or Nothing in Visual Basic). For working with database ANSI SQL null values, use System.Data.SqlTypes nulls rather than Nullable. For more information on working with CLR nullable types in Visual Basic see Nullable Value Types, and for C# see Using Nullable Types.

Nulls and Three-Valued Logic

Allowing null values in column definitions introduces three-valued logic into your application. A comparison can evaluate to one of three conditions:

  • True

  • False

  • Unknown

Because null is considered to be unknown, two null values compared to each other are not considered to be equal. In expressions using arithmetic operators, if any of the operands is null, the result is null as well.
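
For example, in Transact-SQL these rules mean that a comparison with NULL never evaluates to true, and arithmetic with NULL propagates NULL:

-- NULL = NULL evaluates to unknown, so no row is returned.
SELECT 'equal' AS Result WHERE NULL = NULL;

-- Any arithmetic operand that is NULL makes the result NULL.
SELECT 1 + NULL AS Sum;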

Nulls and SqlBoolean

Comparisons between any two System.Data.SqlTypes values return a SqlBoolean. The IsNull property of each SqlType returns a SqlBoolean and can be used to check for null values. The following truth tables show how the AND, OR, and NOT operators function in the presence of a null value. (T=true, F=false, and U=unknown, or null.)

AND: T AND U = U; U AND U = U; F AND U = F.
OR: T OR U = T; U OR U = U; F OR U = U.
NOT: NOT U = U.

Understanding the ANSI_NULLS Option

System.Data.SqlTypes provides the same semantics as when the ANSI_NULLS option is set on in SQL Server. All arithmetic operators (+, -, *, /, %), bitwise operators (~, &, |), and most functions return null if any of the operands or arguments is null, except for the property IsNull.

The ANSI SQL-92 standard does not support columnName = NULL in a WHERE clause. In SQL Server, the ANSI_NULLS option controls both default nullability in the database and evaluation of comparisons against null values. If ANSI_NULLS is turned on (the default), the IS NULL operator must be used in expressions when testing for null values. For example, the following comparison always yields unknown when ANSI_NULLS is on:

colname > NULL  

Comparison to a variable containing a null value also yields unknown:

colname > @MyVariable  

Use the IS NULL or IS NOT NULL predicate to test for a null value. This can add complexity to the WHERE clause. For example, the TerritoryID column in the AdventureWorks Customer table allows null values. If a SELECT statement is to test for null values in addition to others, it must include an IS NULL predicate:

SELECT CustomerID, AccountNumber, TerritoryID  
FROM AdventureWorks.Sales.Customer  
WHERE TerritoryID IN (1, 2, 3)  
   OR TerritoryID IS NULL  

If you set ANSI_NULLS off in SQL Server, you can create expressions that use the equality operator to compare to null. However, you cannot prevent other connections from setting their own null options. Using IS NULL to test for null values always works, regardless of the ANSI_NULLS setting for a connection.

Setting ANSI_NULLS off is not supported in a DataSet, which always follows the ANSI SQL-92 standard for handling null values in System.Data.SqlTypes.

Assigning Null Values

Null values are special, and their storage and assignment semantics differ across different type systems and storage systems. A DataSet is designed to be used with different type and storage systems.

This section describes the null semantics for assigning null values to a DataColumn in a DataRow across the different type systems.

DBNull.Value
This assignment is valid for a DataColumn of any type. If the type implements INullable, DBNull.Value is coerced into the appropriate strongly typed Null value.

SqlType.Null
All System.Data.SqlTypes data types implement INullable. If the strongly typed null value can be converted into the column's data type using implicit cast operators, the assignment should go through. Otherwise an invalid cast exception is thrown.

null
If 'null' is a legal value for the given DataColumn data type, it is coerced into the appropriate DbNull.Value or the Null value associated with the INullable type (SqlType.Null).

derivedUdt.Null
For UDT columns, nulls are always stored based on the type associated with the DataColumn. Consider the case of a UDT associated with a DataColumn that does not implement INullable while its sub-class does. In this case, if a strongly typed null value associated with the derived class is assigned, it is stored as an untyped DbNull.Value, because null storage is always consistent with the DataColumn's data type.

Note

The Nullable<T> or Nullable structure is not currently supported in the DataSet.

Multiple Column (Row) Assignment

DataTable.Rows.Add, DataTable.LoadDataRow, and other APIs that accept an ItemArray that gets mapped to a row map 'null' to the DataColumn's default value. If an object in the array contains DbNull.Value or its strongly typed counterpart, the same rules described above are applied.

In addition, the following rules apply for an instance of DataRow.["columnName"] null assignments:

  1. The default value is DbNull.Value for all columns except strongly typed null columns, where it is the appropriate strongly typed null value.

  2. Null values are never written out during serialization to XML files (as in "xsi:nil").

  3. All non-null values, including defaults, are always written out while serializing to XML. This is unlike XSD/XML semantics where a null value (xsi:nil) is explicit and the default value is implicit (if not present in XML, a validating parser can get it from an associated XSD schema). The opposite is true for a DataTable: a null value is implicit and the default value is explicit.

  4. All missing column values for rows read from XML input are assigned NULL. Rows created using NewRow or similar methods are assigned the DataColumn's default value.

  5. The IsNull method returns true for both DbNull.Value and INullable.Null.

Assigning Null Values

The default value for any System.Data.SqlTypes instance is null.

Nulls in System.Data.SqlTypes are type-specific and cannot be represented by a single value, such as DbNull. Use the IsNull property to check for nulls.

Null values can be assigned to a DataColumn as shown in the following code example. You can directly assign null values to SqlTypes variables without triggering an exception.

Example

The following code example creates a DataTable with two columns defined as SqlInt32 and SqlString. The code adds one row of known values, one row of null values and then iterates through the DataTable, assigning the values to variables and displaying the output in the console window.

C#
static private void WorkWithSqlNulls()
{
    DataTable table = new DataTable();

    // Specify the SqlType for each column.
    DataColumn idColumn =
        table.Columns.Add("ID", typeof(SqlInt32));
    DataColumn descColumn =
        table.Columns.Add("Description", typeof(SqlString));

    // Add some data.
    DataRow nRow = table.NewRow();
    nRow["ID"] = 123;
    nRow["Description"] = "Side Mirror";
    table.Rows.Add(nRow);

    // Add null values.
    nRow = table.NewRow();
    nRow["ID"] = SqlInt32.Null;
    nRow["Description"] = SqlString.Null;
    table.Rows.Add(nRow);

    // Initialize variables to use when
    // extracting the data.
    SqlBoolean isColumnNull = false;
    SqlInt32 idValue = SqlInt32.Zero;
    SqlString descriptionValue = SqlString.Null;

    // Iterate through the DataTable and display the values.
    foreach (DataRow row in table.Rows)
    {
        // Assign values to variables. Note that you 
        // do not have to test for null values.
        idValue = (SqlInt32)row["ID"];
        descriptionValue = (SqlString)row["Description"];

        // Test for null value in ID column.
        isColumnNull = idValue.IsNull;

        // Display variable values in console window.
        Console.Write("isColumnNull={0}, ID={1}, Description={2}",
            isColumnNull, idValue, descriptionValue);
        Console.WriteLine();
    }
}

This example displays the following results:

isColumnNull=False, ID=123, Description=Side Mirror  
isColumnNull=True, ID=Null, Description=Null  

Comparing Null Values with SqlTypes and CLR Types

When comparing null values, it is important to understand the difference between the way the Equals method evaluates null values in System.Data.SqlTypes and the way it works with CLR types. All of the System.Data.SqlTypes Equals methods use database semantics for evaluating null values: if either or both of the values is null, the comparison yields null. On the other hand, using the CLR Equals method on two System.Data.SqlTypes values yields true if both are null. This reflects the difference between using an instance method such as the CLR String.Equals method and using the static/shared method, SqlString.Equals.

The following example demonstrates the difference in results between the SqlString.Equals method and the String.Equals method when each is passed a pair of null values and then a pair of empty strings.

C#
    private static void CompareNulls()
    {
        // Create two new null strings.
        SqlString a = new SqlString();
        SqlString b = new SqlString();

        // Compare nulls using static/shared SqlString.Equals.
        Console.WriteLine("SqlString.Equals shared/static method:");
        Console.WriteLine("  Two nulls={0}", SqlStringEquals(a, b));

        // Compare nulls using instance method String.Equals.
        Console.WriteLine();
        Console.WriteLine("String.Equals instance method:");
        Console.WriteLine("  Two nulls={0}", StringEquals(a, b));

        // Make them empty strings.
        a = "";
        b = "";

        // When comparing two empty strings (""), both the shared/static and
        // the instance Equals methods evaluate to true.
        Console.WriteLine();
        Console.WriteLine("SqlString.Equals shared/static method:");
        Console.WriteLine("  Two empty strings={0}", SqlStringEquals(a, b));

        Console.WriteLine();
        Console.WriteLine("String.Equals instance method:");
        Console.WriteLine("  Two empty strings={0}", StringEquals(a, b));
    }
    
    private static string SqlStringEquals(SqlString string1, SqlString string2)
    {
        // SqlString.Equals uses database semantics for evaluating nulls.
        string returnValue = SqlString.Equals(string1, string2).ToString();
        return returnValue;
    }

    private static string StringEquals(SqlString string1, SqlString string2)
    {
        // String.Equals uses CLR type semantics for evaluating nulls.
        string returnValue = string1.Equals(string2).ToString();
        return returnValue;
    }

The code produces the following output:

SqlString.Equals shared/static method:  
  Two nulls=Null  
  
String.Equals instance method:  
  Two nulls=True  
  
SqlString.Equals shared/static method:  
  Two empty strings=True  
  
String.Equals instance method:  
  Two empty strings=True   

See also

Comparing GUID and uniqueidentifier Values

The globally unique identifier (GUID) data type in SQL Server is represented by the uniqueidentifier data type, which stores a 16-byte binary value. A GUID is a binary number, and its main use is as an identifier that must be unique in a network that has many computers at many sites. GUIDs can be generated by calling the Transact-SQL NEWID function and are guaranteed to be unique throughout the world. For more information, see uniqueidentifier (Transact-SQL).

Working with SqlGuid Values

Because GUID values are long and obscure, they are not meaningful for users. If randomly generated GUIDs are used for key values and you insert many rows, you get random I/O in your indexes, which can negatively impact performance. GUIDs are also relatively large compared with other data types. In general, we recommend using GUIDs only for very narrow scenarios for which no other data type is suitable.

Comparing GUID Values

Comparison operators can be used with uniqueidentifier values. However, ordering is not implemented by comparing the bit patterns of the two values. The only operations that are allowed against a uniqueidentifier value are comparisons (=, <>, <, >, <=, >=) and checking for NULL (IS NULL and IS NOT NULL). No other arithmetic operators are allowed.
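
For example, the following Transact-SQL sketch compares two uniqueidentifier values with the allowed operators:

DECLARE @a uniqueidentifier = NEWID();
DECLARE @b uniqueidentifier = NEWID();

SELECT CASE
           WHEN @a = @b THEN 'equal'
           WHEN @a < @b THEN '@a sorts before @b'
           ELSE '@b sorts before @a'
       END AS ComparisonResult;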

Both Guid and SqlGuid have a CompareTo method for comparing different GUID values. However, System.Guid.CompareTo and SqlTypes.SqlGuid.CompareTo are implemented differently. SqlGuid implements CompareTo using SQL Server behavior, in which the last six bytes of a value are the most significant. Guid evaluates all 16 bytes. The following example demonstrates this behavioral difference. The first section of code displays unsorted Guid values, the second section shows the sorted Guid values, and the third section shows the sorted SqlGuid values. The output is displayed beneath the code listing.

C#
private static void WorkWithGuids()
{
    // Create an ArrayList and fill it with Guid values.
    ArrayList guidList = new ArrayList();
    guidList.Add(new Guid("3AAAAAAA-BBBB-CCCC-DDDD-2EEEEEEEEEEE"));
    guidList.Add(new Guid("2AAAAAAA-BBBB-CCCC-DDDD-1EEEEEEEEEEE"));
    guidList.Add(new Guid("1AAAAAAA-BBBB-CCCC-DDDD-3EEEEEEEEEEE"));

    // Display the unsorted Guid values.
    Console.WriteLine("Unsorted Guids:");
    foreach (Guid guidValue in guidList)
    {
        Console.WriteLine(" {0}", guidValue);
    }
    Console.WriteLine("");

    // Sort the Guids.
    guidList.Sort();

    // Display the sorted Guid values.
    Console.WriteLine("Sorted Guids:");
    foreach (Guid guidSorted in guidList)
    {
        Console.WriteLine(" {0}", guidSorted);
    }
    Console.WriteLine("");
    
    // Create an ArrayList of SqlGuids.
    ArrayList sqlGuidList = new ArrayList();
    sqlGuidList.Add(new SqlGuid("3AAAAAAA-BBBB-CCCC-DDDD-2EEEEEEEEEEE"));
    sqlGuidList.Add(new SqlGuid("2AAAAAAA-BBBB-CCCC-DDDD-1EEEEEEEEEEE"));
    sqlGuidList.Add(new SqlGuid("1AAAAAAA-BBBB-CCCC-DDDD-3EEEEEEEEEEE"));

    // Sort the SqlGuids. The unsorted SqlGuids are in the same order
    // as the unsorted Guid values.
    sqlGuidList.Sort();

    // Display the sorted SqlGuids. The sorted SqlGuid values are ordered
    // differently than the Guid values.
    Console.WriteLine("Sorted SqlGuids:");
    foreach (SqlGuid sqlGuidValue in sqlGuidList)
    {
        Console.WriteLine(" {0}", sqlGuidValue);
    }
}

This example produces the following results.

Unsorted Guids:  
3aaaaaaa-bbbb-cccc-dddd-2eeeeeeeeeee  
2aaaaaaa-bbbb-cccc-dddd-1eeeeeeeeeee  
1aaaaaaa-bbbb-cccc-dddd-3eeeeeeeeeee  
  
Sorted Guids:  
1aaaaaaa-bbbb-cccc-dddd-3eeeeeeeeeee  
2aaaaaaa-bbbb-cccc-dddd-1eeeeeeeeeee  
3aaaaaaa-bbbb-cccc-dddd-2eeeeeeeeeee  
  
Sorted SqlGuids:  
2aaaaaaa-bbbb-cccc-dddd-1eeeeeeeeeee  
3aaaaaaa-bbbb-cccc-dddd-2eeeeeeeeeee  
1aaaaaaa-bbbb-cccc-dddd-3eeeeeeeeeee  

See also

Date and Time Data

SQL Server 2008 introduces new data types for handling date and time information. The new data types include separate types for date and time, and expanded data types with greater range, precision, and time-zone awareness. Starting with the .NET Framework version 3.5 Service Pack (SP) 1, the .NET Framework Data Provider for SQL Server (System.Data.SqlClient) provides full support for all the new features of the SQL Server 2008 Database Engine. You must install the .NET Framework 3.5 SP1 (or later) to use these new features with SqlClient.

Versions of SQL Server earlier than SQL Server 2008 had only two data types for working with date and time values: datetime and smalldatetime. Both of these data types combine a date value and a time value, which makes it difficult to work with only a date or only a time. Also, these data types support only dates that occur after the introduction of the Gregorian calendar in England in 1753. Another limitation is that these older data types are not time-zone aware, which makes it difficult to work with data that originates in multiple time zones.

Complete documentation for SQL Server data types is available in SQL Server Books Online. The following table lists the version-specific entry-level topics for date and time data.

SQL Server Books Online

  1. Using Date and Time Data

Date/Time Data Types Introduced in SQL Server 2008

The following table describes the new date and time data types.

SQL Server data type Description
date The date data type has a range of January 1, 0001 through December 31, 9999 with an accuracy of 1 day. The default value is January 1, 1900. The storage size is 3 bytes.
time The time data type stores time values only, based on a 24-hour clock. The time data type has a range of 00:00:00.0000000 through 23:59:59.9999999 with an accuracy of 100 nanoseconds. The default value is 00:00:00.0000000 (midnight). The time data type supports user-defined fractional second precision, and the storage size varies from 3 to 5 bytes, based on the precision specified.
datetime2 The datetime2 data type combines the range and precision of the date and time data types into a single data type. The default values and string literal formats are the same as those defined for the date and time data types.
datetimeoffset The datetimeoffset data type has all the features of datetime2 with an additional time zone offset. The time zone offset is represented as [+|-] HH:MM. HH is 2 digits, ranging from 00 to 14, that represent the number of hours in the time zone offset. MM is 2 digits, ranging from 00 to 59, that represent the number of additional minutes in the time zone offset. Time formats are supported to 100 nanoseconds. The mandatory + or - sign indicates whether the time zone offset is added to or subtracted from UTC (Coordinated Universal Time) to obtain the local time.

Note

For more information about using the Type System Version keyword, see ConnectionString.

Date Format and Date Order

How SQL Server parses date and time values depends not only on the type system version and server version, but also on the server's default language and format settings. A date string that works for the date formats of one language might be unrecognizable if the query is executed by a connection that uses a different language and date format setting.

The Transact-SQL SET LANGUAGE statement implicitly sets the DATEFORMAT that determines the order of the date parts. You can use the SET DATEFORMAT Transact-SQL statement on a connection to disambiguate date values by ordering the date parts in MDY, DMY, YMD, YDM, MYD, or DYM order.

If you do not specify any DATEFORMAT for the connection, SQL Server uses the default language associated with the connection. For example, a date string of '01/02/03' would be interpreted as MDY (January 2, 2003) on a server with a language setting of United States English, and as DMY (February 1, 2003) on a server with a language setting of British English. The year is determined by using SQL Server's cutoff year rule, which defines the cutoff date for assigning the century value. For more information, see two digit year cutoff Option in SQL Server Books Online.

Note

The YDM date format is not supported when converting from a string format to date, time, datetime2, or datetimeoffset.

For more information about how SQL Server interprets date and time data, see Using Date and Time Data in SQL Server 2008 Books Online.
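
If your application cannot rely on the server's default language, you can issue SET DATEFORMAT on the connection before running date-dependent statements. The following fragment is a minimal sketch of that approach; the open SqlConnection named connection and the choice of ymd order are assumptions for illustration.

C#
// Minimal sketch: force an unambiguous date part order for this connection.
// Assumes an open SqlConnection named connection.
using (SqlCommand setFormat = new SqlCommand("SET DATEFORMAT ymd;", connection))
{
    setFormat.ExecuteNonQuery();
}
// Subsequent statements on this connection interpret '01/02/03'
// as year 2001, month 02, day 03.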

Date/Time Data Types and Parameters

The following enumerations have been added to SqlDbType to support the new date and time data types.

  • SqlDbType.Date

  • SqlDbType.Time

  • SqlDbType.DateTime2

  • SqlDbType.DateTimeOffset

You can specify the data type of a SqlParameter by using one of the preceding SqlDbType enumerations.

Note

You cannot set the DbType property of a SqlParameter to SqlDbType.Date.

You can also specify the type of a SqlParameter generically by setting the DbType property of a SqlParameter object to a particular DbType enumeration value. The following enumeration values have been added to DbType to support the datetime2 and datetimeoffset data types:

  • DbType.DateTime2

  • DbType.DateTimeOffset

These new enumerations supplement the Date, Time, and DateTime enumerations, which existed in earlier versions of the .NET Framework.
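
As a minimal sketch, the following fragment sets the generic DbType property instead of SqlDbType; the parameter name and value are arbitrary and are not part of the original samples.

C#
SqlParameter parameter = new SqlParameter();
parameter.ParameterName = "@DateTimeOffset";
parameter.DbType = DbType.DateTimeOffset;
parameter.Value = DateTimeOffset.Parse("1666-09-02 1:00:00+0");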

The .NET Framework data provider type of a parameter object is inferred from the .NET Framework type of the value of the parameter object, or from the DbType of the parameter object. No new System.Data.SqlTypes data types have been introduced to support the new date and time data types. The following table describes the mappings between the SQL Server 2008 date and time data types and the CLR data types.

SQL Server data type .NET Framework type System.Data.SqlDbType System.Data.DbType
date System.DateTime Date Date
time System.TimeSpan Time Time
datetime2 System.DateTime DateTime2 DateTime2
datetimeoffset System.DateTimeOffset DateTimeOffset DateTimeOffset
datetime System.DateTime DateTime DateTime
smalldatetime System.DateTime DateTime DateTime

SqlParameter Properties

The following table describes SqlParameter properties that are relevant to date and time data types.

Property Description
IsNullable Gets or sets whether a value is nullable. When you send a null parameter value to the server, you must specify DBNull, rather than null (Nothing in Visual Basic). For more information about database nulls, see Handling Null Values.
Precision Gets or sets the maximum number of digits used to represent the value. This setting is ignored for date and time data types.
Scale Gets or sets the number of decimal places to which the time portion of the value is resolved for Time, DateTime2, and DateTimeOffset. The default value is 0, which means that the actual scale is inferred from the value and sent to the server.
Size Ignored for date and time data types.
Value Gets or sets the parameter value.
SqlValue Gets or sets the parameter value.

Note

Time values that are less than zero or greater than or equal to 24 hours will throw an ArgumentException.

Creating Parameters

You can create a SqlParameter object by using its constructor, or by adding it to the Parameters collection of a SqlCommand by calling the Add method of the SqlParameterCollection. The Add method takes as input either constructor arguments or an existing parameter object.

The next sections in this topic provide examples of how to specify date and time parameters. For additional examples of working with parameters, see Configuring Parameters and Parameter Data Types and DataAdapter Parameters.
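
In addition to the constructor-style fragments that follow, a parameter can be created through the Add method of the Parameters collection. The following sketch also sets the Scale property described earlier; it assumes a SqlCommand named command, and the parameter name and scale value are illustrative only.

C#
// Assumes a SqlCommand named command.
SqlParameter parameter = command.Parameters.Add("@time", SqlDbType.Time);
parameter.Scale = 3;   // resolve the time portion to milliseconds
parameter.Value = TimeSpan.Parse("23:59:59.123");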

Date Example

The following code fragment demonstrates how to specify a date parameter.

C#
SqlParameter parameter = new SqlParameter();  
parameter.ParameterName = "@Date";  
parameter.SqlDbType = SqlDbType.Date;  
parameter.Value = "2007/12/1";  

Time Example

The following code fragment demonstrates how to specify a time parameter.

C#
SqlParameter parameter = new SqlParameter();  
parameter.ParameterName = "@time";  
parameter.SqlDbType = SqlDbType.Time;  
parameter.Value = DateTime.Parse("23:59:59").TimeOfDay;  

Datetime2 Example

The following code fragment demonstrates how to specify a datetime2 parameter with both the date and time parts.

C#
SqlParameter parameter = new SqlParameter();  
parameter.ParameterName = "@Datetime2";  
parameter.SqlDbType = SqlDbType.DateTime2;  
parameter.Value = DateTime.Parse("1666-09-02 1:00:00");  

DateTimeOffSet Example

The following code fragment demonstrates how to specify a datetimeoffset parameter with a date, a time, and a time zone offset of 0.

C#
SqlParameter parameter = new SqlParameter();  
parameter.ParameterName = "@DateTimeOffSet";  
parameter.SqlDbType = SqlDbType.DateTimeOffset;  
parameter.Value = DateTimeOffset.Parse("1666-09-02 1:00:00+0");  

AddWithValue

You can also supply parameters by using the AddWithValue method of a SqlCommand, as shown in the following code fragment. However, the AddWithValue method does not allow you to specify the DbType or SqlDbType for the parameter.

C#
command.Parameters.AddWithValue(  
    "@date", DateTimeOffset.Parse("1666-09-02 00:00:00+0"));  

The @date parameter could map to a date, datetime, or datetime2 data type on the server. When working with the new date and time data types, you must explicitly set the parameter's SqlDbType property to the data type of the instance. Using Variant or implicitly supplying parameter values can cause problems with backward compatibility with the datetime and smalldatetime data types.

The following table shows which SqlDbTypes are inferred from which CLR types:

CLR type Inferred SqlDbType
DateTime SqlDbType.DateTime
TimeSpan SqlDbType.Time
DateTimeOffset SqlDbType.DateTimeOffset
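
If the inferred type is not the one you want, you can still call AddWithValue and then override the type on the returned SqlParameter. The following fragment is a hedged sketch of that pattern; it assumes a SqlCommand named command, and the parameter name and value are illustrative.

C#
// AddWithValue returns the new SqlParameter, so the inferred
// type can be overridden explicitly.
SqlParameter parameter = command.Parameters.AddWithValue(
    "@date", DateTime.Parse("1666-09-02"));
parameter.SqlDbType = SqlDbType.DateTime2;  // avoid mapping to datetime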

Retrieving Date and Time Data

The following table describes methods that are used to retrieve SQL Server 2008 date and time values.

SqlClient method Description
GetDateTime Retrieves the specified column value as a DateTime structure.
GetDateTimeOffset Retrieves the specified column value as a DateTimeOffset structure.
GetProviderSpecificFieldType Returns the type that is the underlying provider-specific type for the field. Returns the same types as GetFieldType for new date and time types.
GetProviderSpecificValue Retrieves the value of the specified column. Returns the same types as GetValue for the new date and time types.
GetProviderSpecificValues Retrieves the values in the specified array.
GetSqlString Retrieves the column value as a SqlString. An InvalidCastException occurs if the data cannot be expressed as a SqlString.
GetSqlValue Retrieves column data as its default SqlDbType. Returns the same types as GetValue for the new date and time types.
GetSqlValues Retrieves the values in the specified array.
GetString Retrieves the column value as a string if the Type System Version is set to SQL Server 2005. An InvalidCastException occurs if the data cannot be expressed as a string.
GetTimeSpan Retrieves the specified column value as a TimeSpan structure.
GetValue Retrieves the specified column value as its underlying CLR type.
GetValues Retrieves column values in an array.
GetSchemaTable Returns a DataTable that describes the metadata of the result set.
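
The following fragment is a minimal sketch of retrieving the new types with some of these methods; it assumes a SqlDataReader named reader positioned on a row whose first three columns are date, time, and datetimeoffset values.

C#
// Assumes a SqlDataReader named reader positioned on a row whose first
// three columns are date, time, and datetimeoffset values.
DateTime dateValue = reader.GetDateTime(0);
TimeSpan timeValue = reader.GetTimeSpan(1);
DateTimeOffset offsetValue = reader.GetDateTimeOffset(2);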

Note

The new date and time SqlDbTypes are not supported for code that is executing in-process in SQL Server. An exception will be raised if one of these types is passed to the server.

Specifying Date and Time Values as Literals

You can specify date and time data types by using a variety of different literal string formats, which SQL Server then evaluates at run time, converting them to internal date/time structures. SQL Server recognizes date and time data that is enclosed in single quotation marks ('). The following examples demonstrate some formats:

  • Alphabetic date formats, such as 'October 15, 2006'.

  • Numeric date formats, such as '10/15/2006'.

  • Unseparated string formats, such as '20061015', which would be interpreted as October 15, 2006 if you are using the ISO standard date format.

Note

You can find complete documentation for all of the literal string formats and other features of the date and time data types in SQL Server Books Online.

Time values that are less than zero or greater than or equal to 24 hours will throw an ArgumentException.

Resources in SQL Server 2008 Books Online

For more information about working with date and time values in SQL Server 2008, see the following resources in SQL Server 2008 Books Online.

Topic Description
Date and Time Data Types and Functions (Transact-SQL) Provides an overview of all Transact-SQL date and time data types and functions.
Using Date and Time Data Provides information about the date and time data types and functions, and examples of using them.
Data Types (Transact-SQL) Describes system data types in SQL Server 2008.

See also

Large UDTs

User-defined types (UDTs) allow a developer to extend the server's scalar type system by storing common language runtime (CLR) objects in a SQL Server database. UDTs can contain multiple elements and can have behaviors, unlike the traditional alias data types, which consist of a single SQL Server system data type.

Note

You must install the .NET Framework 3.5 SP1 (or later) to take advantage of the enhanced SqlClient support for large UDTs.

Previously, UDTs were restricted to a maximum size of 8 kilobytes. In SQL Server 2008, this restriction has been removed for UDTs that have a format of UserDefined.

For the complete documentation for user-defined types, see the version of SQL Server Books Online for the version of SQL Server you are using.

SQL Server Books Online

  1. CLR User-Defined Types

Retrieving UDT Schemas Using GetSchema

The GetSchema method of SqlConnection returns database schema information in a DataTable. For more information, see SQL Server Schema Collections.
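
The following fragment is a minimal sketch that lists the schema collections exposed by the provider so that you can confirm which collections, including any UDT-related collections, are available for your server; it assumes that connectionString is a valid connection string.

C#
// Assumes connectionString is a valid connection string.
using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();

    // List the schema collections that this provider and server expose.
    DataTable collections = connection.GetSchema("MetaDataCollections");
    foreach (DataRow row in collections.Rows)
    {
        Console.WriteLine(row["CollectionName"]);
    }
}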

GetSchemaTable Column Values for UDTs

The GetSchemaTable method of a SqlDataReader returns a DataTable that describes column metadata. The following table describes the differences in the column metadata for large UDTs between SQL Server 2005 and SQL Server 2008.

SqlDataReader column SQL Server 2005 SQL Server 2008 and later
ColumnSize Varies Varies
NumericPrecision 255 255
NumericScale 255 255
DataType Byte[] UDT instance
ProviderSpecificDataType SqlTypes.SqlBinary UDT instance
ProviderType 21 (SqlDbType.VarBinary) 29 (SqlDbType.Udt)
NonVersionedProviderType 29 (SqlDbType.Udt) 29 (SqlDbType.Udt)
DataTypeName SqlDbType.VarBinary The three-part name specified as Database.SchemaName.TypeName.
IsLong Varies Varies

SqlDataReader Considerations

The SqlDataReader has been extended beginning in SQL Server 2008 to support retrieving large UDT values. How large UDT values are processed by a SqlDataReader depends on the version of SQL Server you are using, as well as on the Type System Version specified in the connection string. For more information, see ConnectionString.

The following methods of SqlDataReader will return a SqlBinary instead of a UDT when the Type System Version is set to SQL Server 2005:

The following methods will return an array of Byte[] instead of a UDT when the Type System Version is set to SQL Server 2005:

Note that no conversions are made for the current version of ADO.NET.

Specifying SqlParameters

The following SqlParameter properties have been extended to work with large UDTs.

SqlParameter Property Description
Value Gets or sets an object that represents the value of the parameter. The default is null. The property can be SqlBinary, Byte[], or a managed object.
SqlValue Gets or sets an object that represents the value of the parameter. The default is null. The property can be SqlBinary, Byte[], or a managed object.
Size Gets or sets the size of the parameter value to resolve. The default value is 0. The property can be an integer that represents the size of the parameter value. For large UDTs, it can be the actual size of the UDT, or -1 for unknown.
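
The following fragment is a minimal sketch of passing a large UDT as a parameter value; it assumes a SqlCommand named command and a managed UDT instance named udt, and the type name dbo.LargeUDT is hypothetical.

C#
// Assumes a SqlCommand named command and a managed UDT instance named udt.
// The type name dbo.LargeUDT is hypothetical.
SqlParameter udtParameter = command.Parameters.Add("@udt", SqlDbType.Udt);
udtParameter.UdtTypeName = "dbo.LargeUDT";
udtParameter.Size = -1;      // size unknown for a large UDT
udtParameter.Value = udt;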

Retrieving Data Example

The following code fragment demonstrates how to retrieve large UDT data. The connectionString variable assumes a valid connection to a SQL Server database and the commandString variable assumes a valid SELECT statement with the primary key column listed first.

C#
using (SqlConnection connection = new SqlConnection(connectionString))
{
  connection.Open();
  SqlCommand command = new SqlCommand(commandString, connection);
  SqlDataReader reader = command.ExecuteReader();
  while (reader.Read())
  {
    // Retrieve the value of the primary key column.
    int id = reader.GetInt32(0);

    // Retrieve the value of the UDT.
    LargeUDT udt = (LargeUDT)reader[1];

    // You can also use GetSqlValue and GetValue.
    // LargeUDT udt = (LargeUDT)reader.GetSqlValue(1);
    // LargeUDT udt = (LargeUDT)reader.GetValue(1);

    Console.WriteLine(
     "ID={0} LargeUDT={1}", id, udt);
  }
  reader.Close();
}

See also

XML Data in SQL Server

SQL Server exposes the functionality of SQLXML inside the .NET Framework. Developers can write applications that access XML data from an instance of SQL Server, bring the data into the .NET Framework environment, process the data, and send the updates back to SQL Server. XML data can be used in several ways in SQL Server, including data storage, and as parameter values for retrieving data. The SqlXml class in the .NET Framework provides the client-side support for working with data stored in an XML column within SQL Server. For more information, see "SQLXML Managed Classes" in SQL Server Books Online.

In This Section

SQL XML Column Values
Demonstrates how to retrieve and work with XML data retrieved from SQL Server.

Specifying XML Values as Parameters
Demonstrates how to pass XML data as a parameter to a command.

See also

SQL XML Column Values

SQL Server supports the xml data type, and developers can retrieve result sets that include this type by using the standard behavior of the SqlCommand class. An xml column can be retrieved just as any other column is retrieved (into a SqlDataReader, for example), but if you want to work with the content of the column as XML, you must use an XmlReader.

Example

The following console application selects two rows, each containing an xml column, from the Sales.Store table in the AdventureWorks database into a SqlDataReader instance. For each row, the value of the xml column is read by using the GetSqlXml method of SqlDataReader and stored in an XmlReader. Note that you must use GetSqlXml rather than the GetValue method if you want to assign the contents to a SqlXml variable; GetValue returns the value of the xml column as a string.

Note

The AdventureWorks sample database is not installed by default when you install SQL Server. You can install it by running SQL Server Setup.

C#
// Example assumes the following directives:
//     using System.Data.SqlClient;
//     using System.Xml;
//     using System.Data.SqlTypes;

static void GetXmlData(string connectionString)
{
    using (SqlConnection connection = new SqlConnection(connectionString))
    {
        connection.Open();

        // The query includes two specific customers for simplicity's 
        // sake. A more realistic approach would use a parameter
        // for the CustomerID criteria. The example selects two rows
        // in order to demonstrate reading first from one row to 
        // another, then from one node to another within the xml column.
        string commandText =
            "SELECT Demographics from Sales.Store WHERE " +
            "CustomerID = 3 OR CustomerID = 4";

        SqlCommand commandSales = new SqlCommand(commandText, connection);

        SqlDataReader salesReaderData = commandSales.ExecuteReader();

        //  Multiple rows are returned by the SELECT, so each row
        //  is read and an XmlReader (an xml data type) is set to the 
        //  value of its first (and only) column. 
        int countRow = 1;
        while (salesReaderData.Read())
        {
            //  Must use GetSqlXml here to get a SqlXml type.
            //  GetValue returns a string instead of SqlXml.
            SqlXml salesXML =
                salesReaderData.GetSqlXml(0);
            XmlReader salesReaderXml = salesXML.CreateReader();
            Console.WriteLine("-----Row " + countRow + "-----");

            //  Move to the root. 
            salesReaderXml.MoveToContent();

            //  We know each node type is either Element or Text.
            //  All elements within the root are string values. 
            //  For this simple example, no elements are empty. 
            while (salesReaderXml.Read())
            {
                if (salesReaderXml.NodeType == XmlNodeType.Element)
                {
                    string elementLocalName =
                        salesReaderXml.LocalName;
                    salesReaderXml.Read();
                    Console.WriteLine(elementLocalName + ": " +
                        salesReaderXml.Value);
                }
            }
            countRow = countRow + 1;
        }
    }
}

See also

Specifying XML Values as Parameters

If a query requires a parameter whose value is an XML string, developers can supply that value using an instance of the SqlXml data type. There really are no tricks; XML columns in SQL Server accept parameter values in exactly the same way as other data types.

Example

The following console application creates a new table in the AdventureWorks database. The new table includes a column named SalesID and an XML column named SalesInfo.

Note

The AdventureWorks sample database is not installed by default when you install SQL Server. You can install it by running SQL Server Setup.

The example prepares a SqlCommand object to insert a row in the new table. A saved file provides the XML data needed for the SalesInfo column.

To create the file needed for the example to run, create a new text file in the same folder as your project. Name the file MyTestStoreData.xml. Open the file in Notepad and copy and paste the following text:

XML
<StoreSurvey xmlns="http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/StoreSurvey">  
  <AnnualSales>300000</AnnualSales>  
  <AnnualRevenue>30000</AnnualRevenue>  
  <BankName>International Bank</BankName>  
  <BusinessType>BM</BusinessType>  
  <YearOpened>1970</YearOpened>  
  <Specialty>Road</Specialty>  
  <SquareFeet>7000</SquareFeet>  
  <Brands>3</Brands>  
  <Internet>T1</Internet>  
  <NumberEmployees>2</NumberEmployees>  
</StoreSurvey>  
C#
using System;
using System.Data;
using System.Data.SqlClient;
using System.Xml;
using System.Data.SqlTypes;

class Class1
{
    static void Main()
    {
        using (SqlConnection connection = new SqlConnection(GetConnectionString()))
        {
            connection.Open();

            // Create a sample table (dropping it first if it already exists).
            string commandNewTable =
                "IF EXISTS (SELECT * FROM dbo.sysobjects " +
                "WHERE id = object_id(N'[dbo].[XmlDataTypeSample]') " +
                "AND OBJECTPROPERTY(id, N'IsUserTable') = 1) " +
                "DROP TABLE [dbo].[XmlDataTypeSample];" +
                "CREATE TABLE [dbo].[XmlDataTypeSample](" +
                "[SalesID] [int] IDENTITY(1,1) NOT NULL, " +
                "[SalesInfo] [xml])";
            SqlCommand commandAdd =
                new SqlCommand(commandNewTable, connection);
            commandAdd.ExecuteNonQuery();

            string commandText =
                "INSERT INTO [dbo].[XmlDataTypeSample] " +
                "([SalesInfo] ) " +
                "VALUES(@xmlParameter )";
            SqlCommand command =
                new SqlCommand(commandText, connection);

            // Read the saved XML document as a
            // SqlXml-data typed variable.
            SqlXml newXml =
                new SqlXml(new XmlTextReader("MyTestStoreData.xml"));

            // Supply the SqlXml value for the value of the parameter.
            command.Parameters.AddWithValue("@xmlParameter", newXml);

            int result = command.ExecuteNonQuery();
            Console.WriteLine(result + " row was added.");
            Console.WriteLine("Press Enter to continue.");
            Console.ReadLine();
        }
    }

    private static string GetConnectionString()
    {
        // To avoid storing the connection string in your code,
        // you can retrieve it from a configuration file.
        return "Data Source=(local);Integrated Security=true;" +
            "Initial Catalog=AdventureWorks; ";
    }
}

See also

SQL Server Binary and Large-Value Data

SQL Server provides the max specifier, which expands the storage capacity of the varchar, nvarchar, and varbinary data types. varchar(max), nvarchar(max), and varbinary(max) are collectively called large-value data types. You can use the large-value data types to store up to 2^31-1 bytes of data.

SQL Server 2008 introduces the FILESTREAM attribute, which is not a data type, but rather an attribute that can be defined on a column, allowing large-value data to be stored on the file system instead of in the database.

In This Section

Modifying Large-Value (max) Data in ADO.NET
Describes how to work with the large-value data types.

FILESTREAM Data
Describes how to work with large-value data stored in SQL Server 2008 with the FILESTREAM attribute.

See also

Modifying Large-Value (max) Data in ADO.NET

Large object (LOB) data types are those that exceed the maximum row size of 8 kilobytes (KB). SQL Server provides a max specifier for the varchar, nvarchar, and varbinary data types to allow storage of values as large as 2^31-1 bytes. Table columns and Transact-SQL variables may specify varchar(max), nvarchar(max), or varbinary(max) data types. In ADO.NET, the max data types can be fetched by a DataReader, and can also be specified as both input and output parameter values without any special handling. For large varchar data types, data can be retrieved and updated incrementally.

The max data types can be used for comparisons, as Transact-SQL variables, and for concatenation. They can also be used in the DISTINCT, ORDER BY, and GROUP BY clauses of a SELECT statement, as well as in aggregates, joins, and subqueries.
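
For example, a minimal sketch of supplying an nvarchar(max) value as an input parameter might look like the following; the parameter name and the summaryText variable are assumptions for illustration, and a Size of -1 indicates a max data type.

C#
// Assumes a SqlCommand named command and a string named summaryText.
// Specifying -1 as the size indicates an nvarchar(max) parameter.
command.Parameters.Add("@DocumentSummary", SqlDbType.NVarChar, -1).Value = summaryText;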

The following table provides links to the documentation in SQL Server Books Online.

SQL Server Books Online

  1. Using Large-Value Data Types

Large-Value Type Restrictions

The following restrictions apply to the max data types, which do not exist for smaller data types:

  • A sql_variant cannot contain a large varchar data type.

  • Large varchar columns cannot be specified as a key column in an index. They are allowed as an included column in a nonclustered index.

  • Large varchar columns cannot be used as partitioning key columns.

Working with Large-Value Types in Transact-SQL

The Transact-SQL OPENROWSET function is a one-time method of connecting and accessing remote data. It includes all of the connection information necessary to access remote data from an OLE DB data source. OPENROWSET can be referenced in the FROM clause of a query as though it were a table name. It can also be referenced as the target table of an INSERT, UPDATE, or DELETE statement, subject to the capabilities of the OLE DB provider.

The OPENROWSET function includes the BULK rowset provider, which allows you to read data directly from a file without loading the data into a target table. This enables you to use OPENROWSET in a simple INSERT SELECT statement.

The OPENROWSET BULK option arguments provide significant control over where to begin and end reading data, how to deal with errors, and how data is interpreted. For example, you can specify that the data file be read as a single-row, single-column rowset of type varbinary, varchar, or nvarchar. For the complete syntax and options, see SQL Server Books Online.

The following example inserts a photo into the ProductPhoto table in the AdventureWorks sample database. When using the BULK OPENROWSET provider, you must supply the named list of columns even if you aren't inserting values into every column. The primary key in this case is defined as an identity column, and may be omitted from the column list. Note that you must also supply a correlation name at the end of the OPENROWSET statement, which in this case is ThumbnailPhoto. This correlates with the column in the ProductPhoto table into which the file is being loaded.

INSERT Production.ProductPhoto (  
    ThumbnailPhoto,   
    ThumbnailPhotoFilePath,   
    LargePhoto,   
    LargePhotoFilePath)  
SELECT ThumbnailPhoto.*, null, null, N'tricycle_pink.gif'  
FROM OPENROWSET   
    (BULK 'c:\images\tricycle.jpg', SINGLE_BLOB) ThumbnailPhoto  

Updating Data Using UPDATE .WRITE

The Transact-SQL UPDATE statement has new WRITE syntax for modifying the contents of varchar(max), nvarchar(max), or varbinary(max) columns. This allows you to perform partial updates of the data. The UPDATE .WRITE syntax is shown here in abbreviated form:

UPDATE
{ <object> }
SET
{ column_name = { .WRITE ( expression , @Offset , @Length ) } }

The WRITE method specifies that a section of the value of the column_name will be modified. The expression is the value that will be copied to the column_name, the @Offset is the beginning point at which the expression will be written, and the @Length argument is the length of the section in the column.

If Then
The expression is set to NULL @Length is ignored and the value in column_name is truncated at the specified @Offset.
@Offset is NULL The update operation appends the expression at the end of the existing column_name value and @Length is ignored.
@Offset is greater than the length of the column_name value SQL Server returns an error.
@Length is NULL The update operation removes all data from @Offset to the end of the column_name value.

Note

Neither @Offset nor @Length can be a negative number.

Example

This Transact-SQL example updates a partial value in DocumentSummary, an nvarchar(max) column in the Document table in the AdventureWorks database. The word 'components' is replaced by the word 'features' by specifying the replacement word, the beginning location (offset) of the word to be replaced in the existing data, and the number of characters to be replaced (length). The example includes SELECT statements before and after the UPDATE statement to compare results.

USE AdventureWorks;  
GO  
--View the existing value.  
SELECT DocumentSummary  
FROM Production.Document  
WHERE DocumentID = 3;  
GO  
-- The first sentence of the results will be:  
-- Reflectors are vital safety components of your bicycle.  
  
--Modify a single word in the DocumentSummary column  
UPDATE Production.Document  
SET DocumentSummary .WRITE (N'features',28,10)  
WHERE DocumentID = 3 ;  
GO   
--View the modified value.  
SELECT DocumentSummary  
FROM Production.Document  
WHERE DocumentID = 3;  
GO  
-- The first sentence of the results will be:  
-- Reflectors are vital safety features of your bicycle.  

Working with Large-Value Types in ADO.NET

You can work with large value types in ADO.NET by specifying them as SqlParameter objects, by using a SqlDataReader to return a result set, or by using a SqlDataAdapter to fill a DataSet or DataTable. There is no difference between the way you work with a large value type and the way you work with its related, smaller value data type.

Using GetSqlBytes to Retrieve Data

The GetSqlBytes method of the SqlDataReader can be used to retrieve the contents of a varbinary(max) column. The following code fragment assumes a SqlCommand object named cmd that selects varbinary(max) data from a table and a SqlDataReader object named reader that retrieves the data as SqlBytes.

C#
reader = cmd.ExecuteReader(CommandBehavior.CloseConnection);  
while (reader.Read())  
    {  
        SqlBytes bytes = reader.GetSqlBytes(0);  
    }  

Using GetSqlChars to Retrieve Data

The GetSqlChars method of the SqlDataReader can be used to retrieve the contents of a varchar(max) or nvarchar(max) column. The following code fragment assumes a SqlCommand object named cmd that selects nvarchar(max) data from a table and a SqlDataReader object named reader that retrieves the data.

C#
reader = cmd.ExecuteReader(CommandBehavior.CloseConnection);  
while (reader.Read())  
{  
    SqlChars buffer = reader.GetSqlChars(0);  
}  

Using GetSqlBinary to Retrieve Data

The GetSqlBinary method of a SqlDataReader can be used to retrieve the contents of a varbinary(max) column. The following code fragment assumes a SqlCommand object named cmd that selects varbinary(max) data from a table and a SqlDataReader object named reader that retrieves the data as a SqlBinary stream.

C#
reader = cmd.ExecuteReader(CommandBehavior.CloseConnection);  
while (reader.Read())  
    {  
        SqlBinary binaryStream = reader.GetSqlBinary(0);  
    }  

Using GetBytes to Retrieve Data

The GetBytes method of a SqlDataReader reads a stream of bytes from the specified column offset into a byte array starting at the specified array offset. The following code fragment assumes a SqlDataReader object named reader that retrieves bytes into a byte array. Note that, unlike GetSqlBytes, GetBytes requires a size for the array buffer.

C#
while (reader.Read())  
{  
    byte[] buffer = new byte[4000];  
    long byteCount = reader.GetBytes(1, 0, buffer, 0, 4000);  
}  
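
When a value is too large to hold in a single buffer, GetBytes can be called repeatedly with an increasing data offset. The following sketch reads a varbinary(max) column in 4000-byte chunks; it assumes the command was executed with CommandBehavior.SequentialAccess and that the binary column is at ordinal 1.

C#
while (reader.Read())
{
    byte[] buffer = new byte[4000];
    long dataOffset = 0;
    long bytesRead;

    // Keep calling GetBytes with an increasing data offset until it
    // returns 0, indicating the end of the column value.
    while ((bytesRead = reader.GetBytes(1, dataOffset, buffer, 0, buffer.Length)) > 0)
    {
        // Process buffer[0 .. bytesRead) here, for example by writing
        // it to a FileStream or MemoryStream.
        dataOffset += bytesRead;
    }
}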

Using GetValue to Retrieve Data

The GetValue method of a SqlDataReader reads the value from the specified column offset into an array. The following code fragment assumes a SqlDataReader object named reader that retrieves binary data from the first column offset, and then string data from the second column offset.

C#
while (reader.Read())  
{  
    // Read the data from varbinary(max) column  
    byte[] binaryData = (byte[])reader.GetValue(0);  
  
    // Read the data from varchar(max) or nvarchar(max) column  
    String stringData = (String)reader.GetValue(1);  
}  

Converting from Large Value Types to CLR Types

You can convert the contents of a varchar(max) or nvarchar(max) column using any of the string conversion methods, such as ToString. The following code fragment assumes a SqlDataReader object named reader that retrieves the data.

C#
while (reader.Read())  
{  
     string str = reader[0].ToString();  
     Console.WriteLine(str);  
}  

Example

The following code retrieves the name and the LargePhoto object from the ProductPhoto table in the AdventureWorks database and saves it to a file. The assembly needs to be compiled with a reference to the System.Drawing namespace. The GetSqlBytes method of the SqlDataReader returns a SqlBytes object that exposes a Stream property. The code uses this to create a new Bitmap object, and then saves it in the Gif ImageFormat.

C#
static private void TestGetSqlBytes(int documentID, string filePath)
{
    // Assumes GetConnectionString returns a valid connection string.
    using (SqlConnection connection =
               new SqlConnection(GetConnectionString()))
    {
        SqlCommand command = connection.CreateCommand();
        SqlDataReader reader = null;
        try
        {
            // Setup the command
            command.CommandText =
                "SELECT LargePhotoFileName, LargePhoto "
                + "FROM Production.ProductPhoto "
                + "WHERE ProductPhotoID=@ProductPhotoID";
            command.CommandType = CommandType.Text;

            // Declare the parameter
            SqlParameter paramID =
                new SqlParameter("@ProductPhotoID", SqlDbType.Int);
            paramID.Value = documentID;
            command.Parameters.Add(paramID);
            connection.Open();

            string photoName = null;

            reader = command.ExecuteReader(CommandBehavior.CloseConnection);

            if (reader.HasRows)
            {
                while (reader.Read())
                {
                    // Get the name of the file.
                    photoName = reader.GetString(0);

                    // Ensure that the column isn't null
                    if (reader.IsDBNull(1))
                    {
                        Console.WriteLine("{0} is unavailable.", photoName);
                    }
                    else
                    {
                        SqlBytes bytes = reader.GetSqlBytes(1);
                        using (Bitmap productImage = new Bitmap(bytes.Stream))
                        {
                            String fileName = filePath + photoName;

                            // Save in gif format.
                            productImage.Save(fileName, ImageFormat.Gif);
                            Console.WriteLine("Successfully created {0}.", fileName);
                        }
                    }
                }
            }
            else
            {
                Console.WriteLine("No records returned.");
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
        }
        finally
        {
            if (reader != null)
                reader.Dispose();
        }
    }
}

Using Large Value Type Parameters

Large value types can be used in SqlParameter objects the same way you use smaller value types in SqlParameter objects. You can retrieve large value types as SqlParameter values, as shown in the following example. The code assumes that the following GetDocumentSummary stored procedure exists in the AdventureWorks sample database. The stored procedure takes an input parameter named @DocumentID and returns the contents of the DocumentSummary column in the @DocumentSummary output parameter.

CREATE PROCEDURE GetDocumentSummary   
(  
    @DocumentID int,  
    @DocumentSummary nvarchar(MAX) OUTPUT  
)  
AS  
SET NOCOUNT ON  
SELECT  @DocumentSummary=Convert(nvarchar(MAX), DocumentSummary)  
FROM    Production.Document  
WHERE   DocumentID=@DocumentID  

Example

The ADO.NET code creates SqlConnection and SqlCommand objects to execute the GetDocumentSummary stored procedure and retrieve the document summary, which is stored as a large value type. The code passes a value for the @DocumentID input parameter, and displays the results passed back in the @DocumentSummary output parameter in the Console window.

C#
static private string GetDocumentSummary(int documentID)
{
    //Assumes GetConnectionString returns a valid connection string.
    using (SqlConnection connection =
               new SqlConnection(GetConnectionString()))
    {
        connection.Open();
        SqlCommand command = connection.CreateCommand();
        try
        {
            // Setup the command to execute the stored procedure.
            command.CommandText = "GetDocumentSummary";
            command.CommandType = CommandType.StoredProcedure;

            // Set up the input parameter for the DocumentID.
            SqlParameter paramID =
                new SqlParameter("@DocumentID", SqlDbType.Int);
            paramID.Value = documentID;
            command.Parameters.Add(paramID);

            // Set up the output parameter to retrieve the summary.
            SqlParameter paramSummary =
                new SqlParameter("@DocumentSummary",
                SqlDbType.NVarChar, -1);
            paramSummary.Direction = ParameterDirection.Output;
            command.Parameters.Add(paramSummary);

            // Execute the stored procedure.
            command.ExecuteNonQuery();
            Console.WriteLine((String)(paramSummary.Value));
            return (String)(paramSummary.Value);
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
            return null;
        }
    }
}

See also

FILESTREAM Data

The FILESTREAM storage attribute is for binary (BLOB) data stored in a varbinary(max) column. Before FILESTREAM, storing binary data required special handling. Unstructured data, such as text documents, images and video, is often stored outside of the database, making it difficult to manage.

Note

You must install the .NET Framework 3.5 SP1 (or later) to work with FILESTREAM data using SqlClient.

Specifying the FILESTREAM attribute on a varbinary(max) column causes SQL Server to store the data on the local NTFS file system instead of in the database file. Although it is stored separately, you can use the same Transact-SQL statements that are supported for working with varbinary(max) data that is stored in the database.

SqlClient Support for FILESTREAM

The .NET Framework Data Provider for SQL Server, System.Data.SqlClient, supports reading and writing to FILESTREAM data using the SqlFileStream class defined in the System.Data.SqlTypes namespace. SqlFileStream inherits from the Stream class, which provides methods for reading and writing to streams of data. Reading from a stream transfers data from the stream into a data structure, such as an array of bytes. Writing transfers the data from the data structure into a stream.

Creating the SQL Server Table

The following Transact-SQL statements create a table named employees and insert a row of data. After you have enabled FILESTREAM storage, you can use this table in conjunction with the code examples that follow. The links to resources in SQL Server Books Online are located at the end of this topic.

SQL
CREATE TABLE employees
(
  EmployeeId INT  NOT NULL  PRIMARY KEY,
  Photo VARBINARY(MAX) FILESTREAM  NULL,
  RowGuid UNIQUEIDENTIFIER  NOT NULL  ROWGUIDCOL
  UNIQUE DEFAULT NEWID()
)
GO
Insert into employees
Values(1, 0x00, default)
GO

Example: Reading, Overwriting, and Inserting FILESTREAM Data

The following sample demonstrates how to read data from a FILESTREAM. The code gets the logical path to the file and creates the SqlFileStream, setting the FileAccess to Read and the FileOptions to SequentialScan. The code then reads the bytes from the SqlFileStream into a buffer and writes them to the console window.

The sample also demonstrates how to write data to a FILESTREAM in which all existing data is overwritten. The code gets the logical path to the file and creates the SqlFileStream, setting the FileAccess to Write and the FileOptions to SequentialScan. A single byte is written to the SqlFileStream, replacing any data in the file.

The sample also demonstrates how to write data to a FILESTREAM by using the Seek method to append data to the end of the file. The code gets the logical path to the file and creates the SqlFileStream, setting the FileAccess to ReadWrite and the FileOptions to SequentialScan. The code uses the Seek method to seek to the end of the file, appending a single byte to the existing file.

C#
using System;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using System.Data;
using System.IO;

namespace FileStreamTest
{
    class Program
    {
        static void Main(string[] args)
        {
            SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder("server=(local);integrated security=true;database=myDB");
            ReadFileStream(builder);
            OverwriteFileStream(builder);
            InsertFileStream(builder);

            Console.WriteLine("Done");
        }

        private static void ReadFileStream(SqlConnectionStringBuilder connStringBuilder)
        {
            using (SqlConnection connection = new SqlConnection(connStringBuilder.ToString()))
            {
                connection.Open();
                SqlCommand command = new SqlCommand("SELECT TOP(1) Photo.PathName(), GET_FILESTREAM_TRANSACTION_CONTEXT() FROM employees", connection);

                SqlTransaction tran = connection.BeginTransaction(IsolationLevel.ReadCommitted);
                command.Transaction = tran;

                using (SqlDataReader reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // Get the pointer for the file
                        string path = reader.GetString(0);
                        byte[] transactionContext = reader.GetSqlBytes(1).Buffer;

                        // Create the SqlFileStream
                        using (Stream fileStream = new SqlFileStream(path, transactionContext, FileAccess.Read, FileOptions.SequentialScan, allocationSize: 0))
                        {
                            // Read the contents as bytes and write them to the console
                            for (long index = 0; index < fileStream.Length; index++)
                            {
                                Console.WriteLine(fileStream.ReadByte());
                            }
                        }
                    }
                }
                tran.Commit();
            }
        }

        private static void OverwriteFileStream(SqlConnectionStringBuilder connStringBuilder)
        {
            using (SqlConnection connection = new SqlConnection(connStringBuilder.ToString()))
            {
                connection.Open();

                SqlCommand command = new SqlCommand("SELECT TOP(1) Photo.PathName(), GET_FILESTREAM_TRANSACTION_CONTEXT() FROM employees", connection);

                SqlTransaction tran = connection.BeginTransaction(IsolationLevel.ReadCommitted);
                command.Transaction = tran;

                using (SqlDataReader reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // Get the pointer for file
                        string path = reader.GetString(0);
                        byte[] transactionContext = reader.GetSqlBytes(1).Buffer;

                        // Create the SqlFileStream
                        using (Stream fileStream = new SqlFileStream(path, transactionContext, FileAccess.Write, FileOptions.SequentialScan, allocationSize: 0))
                        {
                            // Write a single byte to the file. This will
                            // replace any data in the file.
                            fileStream.WriteByte(0x01);
                        }
                    }
                }
                tran.Commit();
            }
        }

        private static void InsertFileStream(SqlConnectionStringBuilder connStringBuilder)
        {
            using (SqlConnection connection = new SqlConnection(connStringBuilder.ToString()))
            {
                connection.Open();

                SqlCommand command = new SqlCommand("SELECT TOP(1) Photo.PathName(), GET_FILESTREAM_TRANSACTION_CONTEXT() FROM employees", connection);

                SqlTransaction tran = connection.BeginTransaction(IsolationLevel.ReadCommitted);
                command.Transaction = tran;

                using (SqlDataReader reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // Get the pointer for file
                        string path = reader.GetString(0);
                        byte[] transactionContext = reader.GetSqlBytes(1).Buffer;

                        using (Stream fileStream = new SqlFileStream(path, transactionContext, FileAccess.ReadWrite, FileOptions.SequentialScan, allocationSize: 0))
                        {
                            // Seek to the end of the file
                            fileStream.Seek(0, SeekOrigin.End);

                            // Append a single byte
                            fileStream.WriteByte(0x01);
                        }
                    }
                }
                tran.Commit();
            }

        }
    }
}

For another sample, see How to store and fetch binary data into a file stream column.

Resources in SQL Server Books Online

The complete documentation for FILESTREAM is located in the following sections in SQL Server Books Online.

Topic Description
FILESTREAM (SQL Server) Describes when to use FILESTREAM storage and how it integrates the SQL Server Database Engine with an NTFS file system.
Create Client Applications for FILESTREAM Data Describes the Windows API functions for working with FILESTREAM data.
FILESTREAM and Other SQL Server Features Provides considerations, guidelines and limitations for using FILESTREAM data with other features of SQL Server.

See also

Inserting an Image from a File

You can write a binary large object (BLOB) to a database as either binary or character data, depending on the type of field at your data source. BLOB is a generic term that refers to the text, ntext, and image data types, which typically contain documents and pictures.

To write a BLOB value to your database, issue the appropriate INSERT or UPDATE statement and pass the BLOB value as an input parameter (see Configuring Parameters and Parameter Data Types). If your BLOB is stored as text, such as a SQL Server text field, you can pass the BLOB as a string parameter. If the BLOB is stored in binary format, such as a SQL Server image field, you can pass an array of type byte as a binary parameter.

Example

The following code example adds employee information to the Employees table in the Northwind database. A photo of the employee is read from a file and added to the Photo field in the table, which is an image field.

C#
public static void AddEmployee(  
  string lastName,   
  string firstName,   
  string title,   
  DateTime hireDate,   
  int reportsTo,   
  string photoFilePath,   
  string connectionString)  
{  
  byte[] photo = GetPhoto(photoFilePath);  
  
  using (SqlConnection connection = new SqlConnection(  
    connectionString))  
  {  
  SqlCommand command = new SqlCommand(  
    "INSERT INTO Employees (LastName, FirstName, " +  
    "Title, HireDate, ReportsTo, Photo) " +  
    "Values(@LastName, @FirstName, @Title, " +  
    "@HireDate, @ReportsTo, @Photo)", connection);   
  
  command.Parameters.Add("@LastName",    
     SqlDbType.NVarChar, 20).Value = lastName;  
  command.Parameters.Add("@FirstName",   
      SqlDbType.NVarChar, 10).Value = firstName;  
  command.Parameters.Add("@Title",       
      SqlDbType.NVarChar, 30).Value = title;  
  command.Parameters.Add("@HireDate",   
       SqlDbType.DateTime).Value = hireDate;  
  command.Parameters.Add("@ReportsTo",   
      SqlDbType.Int).Value = reportsTo;  
  
  command.Parameters.Add("@Photo",  
      SqlDbType.Image, photo.Length).Value = photo;  
  
  connection.Open();  
  command.ExecuteNonQuery();  
  }  
}  
  
public static byte[] GetPhoto(string filePath)  
{  
  FileStream stream = new FileStream(  
      filePath, FileMode.Open, FileAccess.Read);  
  BinaryReader reader = new BinaryReader(stream);  
  
  byte[] photo = reader.ReadBytes((int)stream.Length);  
  
  reader.Close();  
  stream.Close();  
  
  return photo;  
}  

See also

 

SQL Server Data Operations in ADO.NET

This section describes SQL Server features and functionality that are specific to the .NET Framework Data Provider for SQL Server (System.Data.SqlClient).

In This Section

Bulk Copy Operations in SQL Server
Describes the bulk copy functionality for the .NET Data Provider for SQL Server.

Multiple Active Result Sets (MARS)
Describes how to have more than one SqlDataReader open on a connection when each instance of SqlDataReader is started from a separate command.

Asynchronous Operations
Describes how to perform asynchronous database operations by using an API that is modeled after the asynchronous model used by the .NET Framework.

Table-Valued Parameters
Describes how to work with table-valued parameters, which were introduced in SQL Server 2008.

See also

Bulk Copy Operations in SQL Server

Microsoft SQL Server includes a popular command-line utility named bcp for quickly bulk copying large files into tables or views in SQL Server databases. The SqlBulkCopy class allows you to write managed code solutions that provide similar functionality. There are other ways to load data into a SQL Server table (INSERT statements, for example) but SqlBulkCopy offers a significant performance advantage over them.

The SqlBulkCopy class can be used to write data only to SQL Server tables. But the data source is not limited to SQL Server; any data source can be used, as long as the data can be loaded into a DataTable instance or read with an IDataReader instance.

Using the SqlBulkCopy class, you can perform:

  • A single bulk copy operation

  • Multiple bulk copy operations

  • A bulk copy operation within a transaction

Note

When using .NET Framework version 1.1 or earlier (which does not support the SqlBulkCopy class), you can execute the SQL Server Transact-SQL BULK INSERT statement using the SqlCommand object.

In This Section

Bulk Copy Example Setup
Describes the tables used in the bulk copy examples and provides SQL scripts for creating the tables in the AdventureWorks database.

Single Bulk Copy Operations
Describes how to do a single bulk copy of data into an instance of SQL Server using the SqlBulkCopy class, and how to perform the bulk copy operation using Transact-SQL statements and the SqlCommand class.

Multiple Bulk Copy Operations
Describes how to do multiple bulk copy operations of data into an instance of SQL Server using the SqlBulkCopy class.

Transaction and Bulk Copy Operations
Describes how to perform a bulk copy operation within a transaction, including how to commit or rollback the transaction.

See also

Bulk Copy Example Setup

The SqlBulkCopy class can be used to write data only to SQL Server tables. The code samples shown in this topic use the SQL Server sample database, AdventureWorks. To avoid altering the existing tables, the code samples write data to tables that you must create first.

The BulkCopyDemoMatchingColumns and BulkCopyDemoDifferentColumns tables are both based on the AdventureWorks Production.Product table. In code samples that use these tables, data is added from the Production.Product table to one of these sample tables. The BulkCopyDemoDifferentColumns table is used when the sample illustrates how to map columns from the source data to the destination table; BulkCopyDemoMatchingColumns is used for most other samples.

A few of the code samples demonstrate how to use one SqlBulkCopy class to write to multiple tables. For these samples, the BulkCopyDemoOrderHeader and BulkCopyDemoOrderDetail tables are used as the destination tables. These tables are based on the Sales.SalesOrderHeader and Sales.SalesOrderDetail tables in AdventureWorks.

Note

The SqlBulkCopy code samples are provided to demonstrate the syntax for using SqlBulkCopy only. If the source and destination tables are located in the same SQL Server instance, it is easier and faster to use a Transact-SQL INSERT … SELECT statement to copy the data.

Table Setup

To create the tables necessary for the code samples to run correctly, you must run the following Transact-SQL statements in a SQL Server database.

USE AdventureWorks  
  
IF EXISTS (SELECT * FROM dbo.sysobjects   
 WHERE id = object_id(N'[dbo].[BulkCopyDemoMatchingColumns]')  
 AND OBJECTPROPERTY(id, N'IsUserTable') = 1)  
    DROP TABLE [dbo].[BulkCopyDemoMatchingColumns]  
  
CREATE TABLE [dbo].[BulkCopyDemoMatchingColumns]([ProductID] [int] IDENTITY(1,1) NOT NULL,  
    [Name] [nvarchar](50) NOT NULL,  
    [ProductNumber] [nvarchar](25) NOT NULL,  
 CONSTRAINT [PK_ProductID] PRIMARY KEY CLUSTERED  
(  
    [ProductID] ASC  
) ON [PRIMARY]) ON [PRIMARY]  
  
IF EXISTS (SELECT * FROM dbo.sysobjects   
 WHERE id = object_id(N'[dbo].[BulkCopyDemoDifferentColumns]')  
 AND OBJECTPROPERTY(id, N'IsUserTable') = 1)  
    DROP TABLE [dbo].[BulkCopyDemoDifferentColumns]  
  
CREATE TABLE [dbo].[BulkCopyDemoDifferentColumns]([ProdID] [int] IDENTITY(1,1) NOT NULL,  
    [ProdNum] [nvarchar](25) NOT NULL,  
    [ProdName] [nvarchar](50) NOT NULL,  
 CONSTRAINT [PK_ProdID] PRIMARY KEY CLUSTERED  
(  
    [ProdID] ASC  
) ON [PRIMARY]) ON [PRIMARY]  
  
IF EXISTS (SELECT * FROM dbo.sysobjects   
 WHERE id = object_id(N'[dbo].[BulkCopyDemoOrderHeader]')  
 AND OBJECTPROPERTY(id, N'IsUserTable') = 1)  
    DROP TABLE [dbo].[BulkCopyDemoOrderHeader]  
  
CREATE TABLE [dbo].[BulkCopyDemoOrderHeader]([SalesOrderID] [int] IDENTITY(1,1) NOT NULL,  
    [OrderDate] [datetime] NOT NULL,  
    [AccountNumber] [nvarchar](15) NULL,  
 CONSTRAINT [PK_SalesOrderID] PRIMARY KEY CLUSTERED  
(  
    [SalesOrderID] ASC  
) ON [PRIMARY]) ON [PRIMARY]  
  
IF EXISTS (SELECT * FROM dbo.sysobjects   
 WHERE id = object_id(N'[dbo].[BulkCopyDemoOrderDetail]')  
 AND OBJECTPROPERTY(id, N'IsUserTable') = 1)  
    DROP TABLE [dbo].[BulkCopyDemoOrderDetail]  
  
CREATE TABLE [dbo].[BulkCopyDemoOrderDetail]([SalesOrderID] [int] NOT NULL,  
    [SalesOrderDetailID] [int] NOT NULL,  
    [OrderQty] [smallint] NOT NULL,  
    [ProductID] [int] NOT NULL,  
    [UnitPrice] [money] NOT NULL,  
 CONSTRAINT [PK_LineNumber] PRIMARY KEY CLUSTERED  
(  
    [SalesOrderID] ASC,  
    [SalesOrderDetailID] ASC  
) ON [PRIMARY]) ON [PRIMARY]  

See also

Single Bulk Copy Operations

The simplest approach to performing a SQL Server bulk copy operation is to perform a single operation against a database. By default, a bulk copy operation is performed as an isolated operation: the copy operation occurs in a non-transacted way, with no opportunity for rolling it back.

Note

If you need to roll back all or part of the bulk copy when an error occurs, you can either use a SqlBulkCopy-managed transaction, or perform the bulk copy operation within an existing transaction. SqlBulkCopy will also work with System.Transactions if the connection is enlisted (implicitly or explicitly) into a System.Transactions transaction.

For more information, see Transaction and Bulk Copy Operations.

The general steps for performing a bulk copy operation are as follows:

  1. Connect to the source server and obtain the data to be copied. Data can also come from other sources, if it can be retrieved from an IDataReader or DataTable object.

  2. Connect to the destination server (unless you want SqlBulkCopy to establish a connection for you).

  3. Create a SqlBulkCopy object, setting any necessary properties.

  4. Set the DestinationTableName property to indicate the target table for the bulk insert operation.

  5. Call one of the WriteToServer methods.

  6. Optionally, update properties and call WriteToServer again as necessary.

  7. Call Close, or wrap the bulk copy operations within a Using statement.

Caution

We recommend that the source and target column data types match. If the data types do not match, SqlBulkCopy attempts to convert each source value to the target data type, using the rules employed by the Value property. Conversions can affect performance, and can also result in unexpected errors. For example, a Double data type can be converted to a Decimal data type most of the time, but not always.
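
Where the types differ, one option is to convert the values in the source query itself so that the reader already returns the destination type. The following fragment is a hypothetical sketch only; the ListPrice column and the decimal(10, 2) destination type are illustrative, and sourceConnection is assumed to be an open connection as in the example below.

C#
// Hypothetical sketch: cast a mismatched source column in the source query so
// that SqlBulkCopy receives values that already match the destination type.
SqlCommand castedSource = new SqlCommand(
    "SELECT ProductID, Name, ProductNumber, " +
    "CAST(ListPrice AS decimal(10, 2)) AS ListPrice " +   // illustrative cast
    "FROM Production.Product;", sourceConnection);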

Example

The following console application demonstrates how to load data using the SqlBulkCopy class. In this example, a SqlDataReader is used to copy data from the Production.Product table in the SQL Server AdventureWorks database to a similar table in the same database.

Important

This sample will not run unless you have created the work tables as described in Bulk Copy Example Setup. This code is provided to demonstrate the syntax for using SqlBulkCopy only. If the source and destination tables are located in the same SQL Server instance, it is easier and faster to use a Transact-SQL INSERT … SELECT statement to copy the data.

C#
using System;
using System.Data.SqlClient;

class Program
{
    static void Main()
    {
        string connectionString = GetConnectionString();
        // Open a sourceConnection to the AdventureWorks database.
        using (SqlConnection sourceConnection =
                   new SqlConnection(connectionString))
        {
            sourceConnection.Open();

            // Perform an initial count on the destination table.
            SqlCommand commandRowCount = new SqlCommand(
                "SELECT COUNT(*) FROM " +
                "dbo.BulkCopyDemoMatchingColumns;",
                sourceConnection);
            long countStart = System.Convert.ToInt32(
                commandRowCount.ExecuteScalar());
            Console.WriteLine("Starting row count = {0}", countStart);

            // Get data from the source table as a SqlDataReader.
            SqlCommand commandSourceData = new SqlCommand(
                "SELECT ProductID, Name, " +
                "ProductNumber " +
                "FROM Production.Product;", sourceConnection);
            SqlDataReader reader =
                commandSourceData.ExecuteReader();

            // Open the destination connection. In the real world you would 
            // not use SqlBulkCopy to move data from one table to the other 
            // in the same database. This is for demonstration purposes only.
            using (SqlConnection destinationConnection =
                       new SqlConnection(connectionString))
            {
                destinationConnection.Open();

                // Set up the bulk copy object. 
                // Note that the column positions in the source
                // data reader match the column positions in 
                // the destination table so there is no need to
                // map columns.
                using (SqlBulkCopy bulkCopy =
                           new SqlBulkCopy(destinationConnection))
                {
                    bulkCopy.DestinationTableName =
                        "dbo.BulkCopyDemoMatchingColumns";

                    try
                    {
                        // Write from the source to the destination.
                        bulkCopy.WriteToServer(reader);
                    }
                    catch (Exception ex)
                    {
                        Console.WriteLine(ex.Message);
                    }
                    finally
                    {
                        // Close the SqlDataReader. The SqlBulkCopy
                        // object is automatically closed at the end
                        // of the using block.
                        reader.Close();
                    }
                }

                // Perform a final count on the destination 
                // table to see how many rows were added.
                long countEnd = System.Convert.ToInt32(
                    commandRowCount.ExecuteScalar());
                Console.WriteLine("Ending row count = {0}", countEnd);
                Console.WriteLine("{0} rows were added.", countEnd - countStart);
                Console.WriteLine("Press Enter to finish.");
                Console.ReadLine();
            }
        }
    }

    private static string GetConnectionString()
        // To avoid storing the sourceConnection string in your code, 
        // you can retrieve it from a configuration file. 
    {
        return "Data Source=(local); " +
            " Integrated Security=true;" +
            "Initial Catalog=AdventureWorks;";
    }
}

Performing a Bulk Copy Operation Using Transact-SQL and the Command Class

The following example illustrates how to use the ExecuteNonQuery method to execute the BULK INSERT statement.

Note

The file path for the data source is relative to the server. The server process must have access to that path in order for the bulk copy operation to succeed.

C#
using (SqlConnection connection = new SqlConnection(connectionString))  
{  
    string queryString = "BULK INSERT Northwind.dbo.[Order Details] " +  
        @"FROM 'f:\mydata\data.tbl' " +  
        @"WITH ( FORMATFILE='f:\mydata\data.fmt' )";  
    connection.Open();  
    SqlCommand command = new SqlCommand(queryString, connection);  
  
    command.ExecuteNonQuery();  
}  

See also

Multiple Bulk Copy Operations

You can perform multiple bulk copy operations using a single instance of the SqlBulkCopy class. If the operation parameters change between copies (for example, the name of the destination table), you must update them prior to any subsequent calls to any of the WriteToServer methods, as demonstrated in the following example. Unless explicitly changed, all property values remain the same as they were on the previous bulk copy operation for a given instance.

Note

Performing multiple bulk copy operations using the same instance of SqlBulkCopy is usually more efficient than using a separate instance for each operation.

If you perform several bulk copy operations using the same SqlBulkCopy object, there are no restrictions on whether source or target information is equal or different in each operation. However, you must ensure that column association information is properly set each time you write to the server.

Important

This sample will not run unless you have created the work tables as described in Bulk Copy Example Setup. This code is provided to demonstrate the syntax for using SqlBulkCopy only. If the source and destination tables are located in the same SQL Server instance, it is easier and faster to use a Transact-SQL INSERT … SELECT statement to copy the data.

C#
using System;
using System.Data;
using System.Data.SqlClient;

class Program
{
    static void Main()
    {
        string connectionString = GetConnectionString();
        // Open a connection to the AdventureWorks database.
        using (SqlConnection connection =
                   new SqlConnection(connectionString))
        {
            connection.Open();

            // Empty the destination tables. 
            SqlCommand deleteHeader = new SqlCommand(
                "DELETE FROM dbo.BulkCopyDemoOrderHeader;",
                connection);
            deleteHeader.ExecuteNonQuery();
            SqlCommand deleteDetail = new SqlCommand(
                "DELETE FROM dbo.BulkCopyDemoOrderDetail;",
                connection);
            deleteDetail.ExecuteNonQuery();

            // Perform an initial count on the destination
            //  table with matching columns. 
            SqlCommand countRowHeader = new SqlCommand(
                "SELECT COUNT(*) FROM dbo.BulkCopyDemoOrderHeader;",
                connection);
            long countStartHeader = System.Convert.ToInt32(
                countRowHeader.ExecuteScalar());
            Console.WriteLine(
                "Starting row count for Header table = {0}",
                countStartHeader);

            // Perform an initial count on the destination
            // table with different column positions. 
            SqlCommand countRowDetail = new SqlCommand(
                "SELECT COUNT(*) FROM dbo.BulkCopyDemoOrderDetail;",
                connection);
            long countStartDetail = System.Convert.ToInt32(
                countRowDetail.ExecuteScalar());
            Console.WriteLine(
                "Starting row count for Detail table = {0}",
                countStartDetail);

            // Get data from the source table as a SqlDataReader.
            // The Sales.SalesOrderHeader and Sales.SalesOrderDetail
            // tables are quite large and could easily cause a timeout
            // if all data from the tables is added to the destination. 
            // To keep the example simple and quick, a parameter is  
            // used to select only orders for a particular account 
            // as the source for the bulk insert. 
            SqlCommand headerData = new SqlCommand(
                "SELECT [SalesOrderID], [OrderDate], " +
                "[AccountNumber] FROM [Sales].[SalesOrderHeader] " +
                "WHERE [AccountNumber] = @accountNumber;",
                connection);
            SqlParameter parameterAccount = new SqlParameter();
            parameterAccount.ParameterName = "@accountNumber";
            parameterAccount.SqlDbType = SqlDbType.NVarChar;
            parameterAccount.Direction = ParameterDirection.Input;
            parameterAccount.Value = "10-4020-000034";
            headerData.Parameters.Add(parameterAccount);
            SqlDataReader readerHeader = headerData.ExecuteReader();

            // Get the Detail data in a separate connection.
            using (SqlConnection connection2 = new SqlConnection(connectionString))
            {
                connection2.Open();
                SqlCommand sourceDetailData = new SqlCommand(
                    "SELECT [Sales].[SalesOrderDetail].[SalesOrderID], [SalesOrderDetailID], " +
                    "[OrderQty], [ProductID], [UnitPrice] FROM [Sales].[SalesOrderDetail] " +
                    "INNER JOIN [Sales].[SalesOrderHeader] ON [Sales].[SalesOrderDetail]." +
                    "[SalesOrderID] = [Sales].[SalesOrderHeader].[SalesOrderID] " +
                    "WHERE [AccountNumber] = @accountNumber;", connection2);

                SqlParameter accountDetail = new SqlParameter();
                accountDetail.ParameterName = "@accountNumber";
                accountDetail.SqlDbType = SqlDbType.NVarChar;
                accountDetail.Direction = ParameterDirection.Input;
                accountDetail.Value = "10-4020-000034";
                sourceDetailData.Parameters.Add(accountDetail);
                SqlDataReader readerDetail = sourceDetailData.ExecuteReader();

                // Create the SqlBulkCopy object. 
                using (SqlBulkCopy bulkCopy =
                           new SqlBulkCopy(connectionString))
                {
                    bulkCopy.DestinationTableName =
                        "dbo.BulkCopyDemoOrderHeader";

                    // Guarantee that columns are mapped correctly by
                    // defining the column mappings for the order.
                    bulkCopy.ColumnMappings.Add("SalesOrderID", "SalesOrderID");
                    bulkCopy.ColumnMappings.Add("OrderDate", "OrderDate");
                    bulkCopy.ColumnMappings.Add("AccountNumber", "AccountNumber");

                    // Write readerHeader to the destination.
                    try
                    {
                        bulkCopy.WriteToServer(readerHeader);
                    }
                    catch (Exception ex)
                    {
                        Console.WriteLine(ex.Message);
                    }
                    finally
                    {
                        readerHeader.Close();
                    }

                    // Set up the order details destination. 
                    bulkCopy.DestinationTableName ="dbo.BulkCopyDemoOrderDetail";

                    // Clear the ColumnMappingCollection.
                    bulkCopy.ColumnMappings.Clear();

                    // Add order detail column mappings.
                    bulkCopy.ColumnMappings.Add("SalesOrderID", "SalesOrderID");
                    bulkCopy.ColumnMappings.Add("SalesOrderDetailID", "SalesOrderDetailID");
                    bulkCopy.ColumnMappings.Add("OrderQty", "OrderQty");
                    bulkCopy.ColumnMappings.Add("ProductID", "ProductID");
                    bulkCopy.ColumnMappings.Add("UnitPrice", "UnitPrice");

                    // Write readerDetail to the destination.
                    try
                    {
                        bulkCopy.WriteToServer(readerDetail);
                    }
                    catch (Exception ex)
                    {
                        Console.WriteLine(ex.Message);
                    }
                    finally
                    {
                        readerDetail.Close();
                    }
                }

                // Perform a final count on the destination
                // tables to see how many rows were added. 
                long countEndHeader = System.Convert.ToInt32(
                    countRowHeader.ExecuteScalar());
                Console.WriteLine("{0} rows were added to the Header table.",
                    countEndHeader - countStartHeader);
                long countEndDetail = System.Convert.ToInt32(
                    countRowDetail.ExecuteScalar());
                Console.WriteLine("{0} rows were added to the Detail table.",
                    countEndDetail - countStartDetail);
                Console.WriteLine("Press Enter to finish.");
                Console.ReadLine();
            }
        }
    }

    private static string GetConnectionString()
        // To avoid storing the connection string in your code, 
        // you can retrieve it from a configuration file. 
    {
        return "Data Source=(local); " +
            " Integrated Security=true;" +
            "Initial Catalog=AdventureWorks;";
    }
}

See also

Transaction and Bulk Copy Operations

Bulk copy operations can be performed as isolated operations or as part of a multiple step transaction. This latter option enables you to perform more than one bulk copy operation within the same transaction, as well as perform other database operations (such as inserts, updates, and deletes) while still being able to commit or roll back the entire transaction.

By default, a bulk copy operation is performed as an isolated operation. The bulk copy operation occurs in a non-transacted way, with no opportunity for rolling it back. If you need to roll back all or part of the bulk copy when an error occurs, you can use a SqlBulkCopy-managed transaction, perform the bulk copy operation within an existing transaction, or enlist the connection in a System.Transactions transaction.

Performing a Non-transacted Bulk Copy Operation

The following Console application shows what happens when a non-transacted bulk copy operation encounters an error partway through the operation.

In the example, the source table and destination table each include an Identity column named ProductID. The code first prepares the destination table by deleting all rows and then inserting a single row whose ProductID is known to exist in the source table. By default, a new value for the Identity column is generated in the destination table for each row added. In this example, the SqlBulkCopyOptions.KeepIdentity option is set when the SqlBulkCopy object is created, which forces the bulk load process to use the Identity values from the source table instead.

The bulk copy operation is executed with the BatchSize property set to 10. When the operation encounters the invalid row, an exception is thrown. In this first example, the bulk copy operation is non-transacted. All batches copied up to the point of the error are committed; the batch containing the duplicate key is rolled back, and the bulk copy operation is halted before processing any other batches.

Note

This sample will not run unless you have created the work tables as described in Bulk Copy Example Setup. This code is provided to demonstrate the syntax for using SqlBulkCopy only. If the source and destination tables are located in the same SQL Server instance, it is easier and faster to use a Transact-SQL INSERT … SELECT statement to copy the data.

C#
using System;
using System.Data.SqlClient;

class Program
{
    static void Main()
    {
        string connectionString = GetConnectionString();
        // Open a sourceConnection to the AdventureWorks database.
        using (SqlConnection sourceConnection =
                   new SqlConnection(connectionString))
        {
            sourceConnection.Open();

            //  Delete all from the destination table.         
            SqlCommand commandDelete = new SqlCommand();
            commandDelete.Connection = sourceConnection;
            commandDelete.CommandText =
                "DELETE FROM dbo.BulkCopyDemoMatchingColumns";
            commandDelete.ExecuteNonQuery();

            //  Add a single row that will result in duplicate key         
            //  when all rows from source are bulk copied.         
            //  Note that this technique will only be successful in          
            //  illustrating the point if a row with ProductID = 446           
            //  exists in the AdventureWorks Production.Products table.          
            //  If you have made changes to the data in this table, change         
            //  the SQL statement in the code to add a ProductID that         
            //  does exist in your version of the Production.Products         
            //  table. Choose any ProductID in the middle of the table         
            //  (not first or last row) to best illustrate the result.         
            SqlCommand commandInsert = new SqlCommand();
            commandInsert.Connection = sourceConnection;
            commandInsert.CommandText =
                "SET IDENTITY_INSERT dbo.BulkCopyDemoMatchingColumns ON;" +
                "INSERT INTO " + "dbo.BulkCopyDemoMatchingColumns " +
                "([ProductID], [Name] ,[ProductNumber]) " +
                "VALUES(446, 'Lock Nut 23','LN-3416');" +
                "SET IDENTITY_INSERT dbo.BulkCopyDemoMatchingColumns OFF";
            commandInsert.ExecuteNonQuery();

            // Perform an initial count on the destination table.
            SqlCommand commandRowCount = new SqlCommand(
                "SELECT COUNT(*) FROM dbo.BulkCopyDemoMatchingColumns;",
                sourceConnection);
            long countStart = System.Convert.ToInt32(
                commandRowCount.ExecuteScalar());
            Console.WriteLine("Starting row count = {0}", countStart);

            //  Get data from the source table as a SqlDataReader.         
            SqlCommand commandSourceData = new SqlCommand(
                "SELECT ProductID, Name, ProductNumber " +
                "FROM Production.Product;", sourceConnection);
            SqlDataReader reader = commandSourceData.ExecuteReader();

            // Set up the bulk copy object using the KeepIdentity option. 
            using (SqlBulkCopy bulkCopy = new SqlBulkCopy(
                       connectionString, SqlBulkCopyOptions.KeepIdentity))
            {
                bulkCopy.BatchSize = 10;
                bulkCopy.DestinationTableName =
                    "dbo.BulkCopyDemoMatchingColumns";

                // Write from the source to the destination.
                // This should fail with a duplicate key error
                // after some of the batches have been copied.
                try
                {
                    bulkCopy.WriteToServer(reader);
                }
                catch (Exception ex)
                {
                    Console.WriteLine(ex.Message);
                }
                finally
                {
                    reader.Close();
                }
            }

            // Perform a final count on the destination 
            // table to see how many rows were added.
            long countEnd = System.Convert.ToInt32(
                commandRowCount.ExecuteScalar());
            Console.WriteLine("Ending row count = {0}", countEnd);
            Console.WriteLine("{0} rows were added.", countEnd - countStart);
            Console.WriteLine("Press Enter to finish.");
            Console.ReadLine();
        }
    }

    private static string GetConnectionString()
        // To avoid storing the sourceConnection string in your code, 
        // you can retrieve it from a configuration file. 
    {
        return "Data Source=(local); " +
            " Integrated Security=true;" +
            "Initial Catalog=AdventureWorks;";
    }
}

Performing a Dedicated Bulk Copy Operation in a Transaction

By default, a bulk copy operation is its own transaction. When you want to perform a dedicated bulk copy operation, create a new instance of SqlBulkCopy with a connection string, or use an existing SqlConnection object that has no active transaction. In each scenario, the bulk copy operation creates the transaction, and then commits or rolls it back.

You can specify the UseInternalTransaction option in the SqlBulkCopy class constructor to explicitly cause a bulk copy operation to execute in its own transaction, causing each batch of the bulk copy operation to execute within a separate transaction.

Note

Since different batches are executed in different transactions, if an error occurs during the bulk copy operation, all the rows in the current batch will be rolled back, but rows from previous batches will remain in the database.

The following console application is similar to the previous example, with one exception: In this example, the bulk copy operation manages its own transactions. All batches copied up to the point of the error are committed; the batch containing the duplicate key is rolled back, and the bulk copy operation is halted before processing any other batches.

Important

This sample will not run unless you have created the work tables as described in Bulk Copy Example Setup. This code is provided to demonstrate the syntax for using SqlBulkCopy only. If the source and destination tables are located in the same SQL Server instance, it is easier and faster to use a Transact-SQL INSERT … SELECT statement to copy the data.

C#
using System;
using System.Data.SqlClient;

class Program
{
    static void Main()
    {
        string connectionString = GetConnectionString();
        // Open a sourceConnection to the AdventureWorks database.
        using (SqlConnection sourceConnection =
                   new SqlConnection(connectionString))
        {
            sourceConnection.Open();

            //  Delete all from the destination table.         
            SqlCommand commandDelete = new SqlCommand();
            commandDelete.Connection = sourceConnection;
            commandDelete.CommandText =
                "DELETE FROM dbo.BulkCopyDemoMatchingColumns";
            commandDelete.ExecuteNonQuery();

            //  Add a single row that will result in duplicate key         
            //  when all rows from source are bulk copied.         
            //  Note that this technique will only be successful in          
            //  illustrating the point if a row with ProductID = 446           
            //  exists in the AdventureWorks Production.Products table.          
            //  If you have made changes to the data in this table, change         
            //  the SQL statement in the code to add a ProductID that         
            //  does exist in your version of the Production.Products         
            //  table. Choose any ProductID in the middle of the table         
            //  (not first or last row) to best illustrate the result.         
            SqlCommand commandInsert = new SqlCommand();
            commandInsert.Connection = sourceConnection;
            commandInsert.CommandText =
                "SET IDENTITY_INSERT dbo.BulkCopyDemoMatchingColumns ON;" +
                "INSERT INTO " + "dbo.BulkCopyDemoMatchingColumns " +
                "([ProductID], [Name] ,[ProductNumber]) " +
                "VALUES(446, 'Lock Nut 23','LN-3416');" +
                "SET IDENTITY_INSERT dbo.BulkCopyDemoMatchingColumns OFF";
            commandInsert.ExecuteNonQuery();

            // Perform an initial count on the destination table.
            SqlCommand commandRowCount = new SqlCommand(
                "SELECT COUNT(*) FROM dbo.BulkCopyDemoMatchingColumns;",
                sourceConnection);
            long countStart = System.Convert.ToInt32(
                commandRowCount.ExecuteScalar());
            Console.WriteLine("Starting row count = {0}", countStart);

            //  Get data from the source table as a SqlDataReader.         
            SqlCommand commandSourceData = new SqlCommand(
                "SELECT ProductID, Name, ProductNumber " +
                "FROM Production.Product;", sourceConnection);
            SqlDataReader reader = commandSourceData.ExecuteReader();

            // Set up the bulk copy object.
            // Note that when specifying the UseInternalTransaction
            // option, you cannot also specify an external transaction.
            // Therefore, you must use the SqlBulkCopy construct that
            // requires a string for the connection, rather than an
            // existing SqlConnection object. 
            using (SqlBulkCopy bulkCopy = new SqlBulkCopy(
                       connectionString, SqlBulkCopyOptions.KeepIdentity |
                       SqlBulkCopyOptions.UseInternalTransaction))
            {
                bulkCopy.BatchSize = 10;
                bulkCopy.DestinationTableName =
                    "dbo.BulkCopyDemoMatchingColumns";

                // Write from the source to the destination.
                // This should fail with a duplicate key error
                // after some of the batches have been copied.
                try
                {
                    bulkCopy.WriteToServer(reader);
                }
                catch (Exception ex)
                {
                    Console.WriteLine(ex.Message);
                }
                finally
                {
                    reader.Close();
                }
            }

            // Perform a final count on the destination 
            // table to see how many rows were added.
            long countEnd = System.Convert.ToInt32(
                commandRowCount.ExecuteScalar());
            Console.WriteLine("Ending row count = {0}", countEnd);
            Console.WriteLine("{0} rows were added.", countEnd - countStart);
            Console.WriteLine("Press Enter to finish.");
            Console.ReadLine();
        }
    }

    private static string GetConnectionString()
        // To avoid storing the sourceConnection string in your code, 
        // you can retrieve it from a configuration file. 
    {
        return "Data Source=(local); " +
            " Integrated Security=true;" +
            "Initial Catalog=AdventureWorks;";
    }
}

Using Existing Transactions

You can specify an existing SqlTransaction object as a parameter in a SqlBulkCopy constructor. In this situation, the bulk copy operation is performed in the existing transaction, and no change is made to the transaction state (that is, it is neither committed nor rolled back). This allows an application to include the bulk copy operation in a transaction with other database operations. However, if you pass a null reference instead of a SqlTransaction object and the connection has an active transaction, an exception is thrown.

If you need to roll back the entire bulk copy operation because an error occurs, or if the bulk copy should execute as part of a larger process that can be rolled back, you can provide a SqlTransaction object to the SqlBulkCopy constructor.

The following console application is similar to the first (non-transacted) example, with one exception: in this example, the bulk copy operation is included in a larger, external transaction. When the primary key violation error occurs, the entire transaction is rolled back and no rows are added to the destination table.

Important

This sample will not run unless you have created the work tables as described in Bulk Copy Example Setup. This code is provided to demonstrate the syntax for using SqlBulkCopy only. If the source and destination tables are located in the same SQL Server instance, it is easier and faster to use a Transact-SQL INSERT … SELECT statement to copy the data.

C#
using System;
using System.Data.SqlClient;

class Program
{
    static void Main()
    {
        string connectionString = GetConnectionString();
        // Open a sourceConnection to the AdventureWorks database.
        using (SqlConnection sourceConnection =
                   new SqlConnection(connectionString))
        {
            sourceConnection.Open();

            //  Delete all from the destination table.         
            SqlCommand commandDelete = new SqlCommand();
            commandDelete.Connection = sourceConnection;
            commandDelete.CommandText =
                "DELETE FROM dbo.BulkCopyDemoMatchingColumns";
            commandDelete.ExecuteNonQuery();

            //  Add a single row that will result in duplicate key         
            //  when all rows from source are bulk copied.         
            //  Note that this technique will only be successful in          
            //  illustrating the point if a row with ProductID = 446           
            //  exists in the AdventureWorks Production.Products table.          
            //  If you have made changes to the data in this table, change         
            //  the SQL statement in the code to add a ProductID that         
            //  does exist in your version of the Production.Products         
            //  table. Choose any ProductID in the middle of the table         
            //  (not first or last row) to best illustrate the result.         
            SqlCommand commandInsert = new SqlCommand();
            commandInsert.Connection = sourceConnection;
            commandInsert.CommandText =
                "SET IDENTITY_INSERT dbo.BulkCopyDemoMatchingColumns ON;" +
                "INSERT INTO " + "dbo.BulkCopyDemoMatchingColumns " +
                "([ProductID], [Name] ,[ProductNumber]) " +
                "VALUES(446, 'Lock Nut 23','LN-3416');" +
                "SET IDENTITY_INSERT dbo.BulkCopyDemoMatchingColumns OFF";
            commandInsert.ExecuteNonQuery();

            // Perform an initial count on the destination table.
            SqlCommand commandRowCount = new SqlCommand(
                "SELECT COUNT(*) FROM dbo.BulkCopyDemoMatchingColumns;",
                sourceConnection);
            long countStart = System.Convert.ToInt32(
                commandRowCount.ExecuteScalar());
            Console.WriteLine("Starting row count = {0}", countStart);

            //  Get data from the source table as a SqlDataReader.         
            SqlCommand commandSourceData = new SqlCommand(
                "SELECT ProductID, Name, ProductNumber " +
                "FROM Production.Product;", sourceConnection);
            SqlDataReader reader = commandSourceData.ExecuteReader();

            //Set up the bulk copy object inside the transaction. 
            using (SqlConnection destinationConnection =
                       new SqlConnection(connectionString))
            {
                destinationConnection.Open();

                using (SqlTransaction transaction =
                           destinationConnection.BeginTransaction())
                {
                    using (SqlBulkCopy bulkCopy = new SqlBulkCopy(
                               destinationConnection, SqlBulkCopyOptions.KeepIdentity,
                               transaction))
                    {
                        bulkCopy.BatchSize = 10;
                        bulkCopy.DestinationTableName =
                            "dbo.BulkCopyDemoMatchingColumns";

                        // Write from the source to the destination.
                        // This should fail with a duplicate key error.
                        try
                        {
                            bulkCopy.WriteToServer(reader);
                            transaction.Commit();
                        }
                        catch (Exception ex)
                        {
                            Console.WriteLine(ex.Message);
                            transaction.Rollback();
                        }
                        finally
                        {
                            reader.Close();
                        }
                    }
                }
            }

            // Perform a final count on the destination 
            // table to see how many rows were added.
            long countEnd = System.Convert.ToInt32(
                commandRowCount.ExecuteScalar());
            Console.WriteLine("Ending row count = {0}", countEnd);
            Console.WriteLine("{0} rows were added.", countEnd - countStart);
            Console.WriteLine("Press Enter to finish.");
            Console.ReadLine();
        }
    }

    private static string GetConnectionString()
        // To avoid storing the sourceConnection string in your code, 
        // you can retrieve it from a configuration file. 
    {
        return "Data Source=(local); " +
            " Integrated Security=true;" +
            "Initial Catalog=AdventureWorks;";
    }
}

See also

Multiple Active Result Sets (MARS)

Multiple Active Result Sets (MARS) is a feature that allows the execution of multiple batches on a single connection. In previous versions, only one batch could be executed at a time against a single connection. Executing multiple batches with MARS does not imply simultaneous execution of operations.

In This Section

Enabling Multiple Active Result Sets
Discusses how to use MARS with SQL Server.

Manipulating Data
Provides examples of coding MARS applications.

Related Sections

Asynchronous Operations
Provides details on using the new asynchronous features in ADO.NET.

See also

Enabling Multiple Active Result Sets

Multiple Active Result Sets (MARS) is a feature that works with SQL Server to allow the execution of multiple batches on a single connection. When MARS is enabled for use with SQL Server, each command object used adds a session to the connection.

Note

A single MARS session opens one logical connection for MARS to use and then one logical connection for each active command.

Enabling and Disabling MARS in the Connection String

Note

The following connection strings use the sample AdventureWorks database included with SQL Server. The connection strings provided assume that the database is installed on a server named MSSQL1. Modify the connection string as necessary for your environment.

The MARS feature is disabled by default. It can be enabled by adding the "MultipleActiveResultSets=True" keyword pair to your connection string. "True" is the only valid value for enabling MARS. The following example demonstrates how to connect to an instance of SQL Server and how to specify that MARS should be enabled.

C#
string connectionString = "Data Source=MSSQL1;" +   
    "Initial Catalog=AdventureWorks;Integrated Security=SSPI;" +  
    "MultipleActiveResultSets=True";  

You can disable MARS by adding the "MultipleActiveResultSets=False" keyword pair to your connection string. "False" is the only valid value for disabling MARS. The following connection string demonstrates how to disable MARS.

C#
string connectionString = "Data Source=MSSQL1;" +   
    "Initial Catalog=AdventureWorks;Integrated Security=SSPI;" +  
    "MultipleActiveResultSets=False";  

Special Considerations When Using MARS

In general, existing applications should not need modification to use a MARS-enabled connection. However, if you wish to use MARS features in your applications, you should understand the following special considerations.

Statement Interleaving

MARS operations execute synchronously on the server. Statement interleaving of SELECT and BULK INSERT statements is allowed. However, data manipulation language (DML) and data definition language (DDL) statements execute atomically. Any statements attempting to execute while an atomic batch is executing are blocked. Parallel execution at the server is not a MARS feature.

If two batches are submitted under a MARS connection, one of them containing a SELECT statement and the other containing a DML statement, the DML statement can begin executing while the SELECT statement is executing. However, the DML statement must run to completion before the SELECT statement can make progress. If both statements are running under the same transaction, any changes made by a DML statement after the SELECT statement has started execution are not visible to the read operation.

A WAITFOR statement inside a SELECT statement does not yield the transaction while it is waiting, that is, until the first row is produced. This implies that no other batches can execute within the same connection while a WAITFOR statement is waiting.
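
The following fragment is a minimal sketch of interleaving on a single MARS-enabled connection, assuming a connectionString variable that includes MultipleActiveResultSets=True and the AdventureWorks Production.Product table used elsewhere in this section. The UPDATE is issued while the SELECT's reader is still open; each UPDATE runs to completion before the reader produces its next row.

C#
// Minimal sketch only: a DML statement interleaved with an open reader on the
// same MARS-enabled connection (connectionString is assumed).
using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();
    SqlCommand select = new SqlCommand(
        "SELECT ProductID FROM Production.Product;", connection);
    using (SqlDataReader reader = select.ExecuteReader())
    {
        while (reader.Read())
        {
            SqlCommand touch = new SqlCommand(
                "UPDATE Production.Product SET ModifiedDate = GETDATE() " +
                "WHERE ProductID = @id;", connection);
            touch.Parameters.AddWithValue("@id", reader.GetInt32(0));
            touch.ExecuteNonQuery();   // interleaves with the open reader
        }
    }
}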

MARS Session Cache

When a connection is opened with MARS enabled, a logical session is created, which adds additional overhead. To minimize overhead and enhance performance, SqlClient caches the MARS session within a connection. The cache contains at most 10 MARS sessions. This value is not user adjustable. If the session limit is reached, a new session is created—an error is not generated. The cache and sessions contained in it are per-connection; they are not shared across connections. When a session is released, it is returned to the pool unless the pool's upper limit has been reached. If the cache pool is full, the session is closed. MARS sessions do not expire. They are only cleaned up when the connection object is disposed. The MARS session cache is not preloaded. It is loaded as the application requires more sessions.

Thread Safety

MARS operations are not thread-safe.

Connection Pooling

MARS-enabled connections are pooled like any other connection. If an application opens two connections, one with MARS enabled and one with MARS disabled, the two connections are in separate pools. For more information, see SQL Server Connection Pooling (ADO.NET).
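
As a minimal sketch, the two connection strings below differ only in the MultipleActiveResultSets keyword, so the resulting connections are drawn from, and returned to, separate pools (the server and database names follow the other samples in this section).

C#
// Minimal sketch only: connections whose strings differ in the MARS keyword
// are pooled separately.
string marsOn = "Data Source=(local);Integrated Security=SSPI;" +
    "Initial Catalog=AdventureWorks;MultipleActiveResultSets=True";
string marsOff = "Data Source=(local);Integrated Security=SSPI;" +
    "Initial Catalog=AdventureWorks;MultipleActiveResultSets=False";

using (SqlConnection first = new SqlConnection(marsOn))
using (SqlConnection second = new SqlConnection(marsOff))
{
    first.Open();    // served from the MARS-enabled pool
    second.Open();   // served from a separate, MARS-disabled pool
}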

SQL Server Batch Execution Environment

When a connection is opened, a default environment is defined. This environment is then copied into a logical MARS session.

The batch execution environment includes the following components:

  • Set options (for example, ANSI_NULLS, DATE_FORMAT, LANGUAGE, TEXTSIZE)

  • Security context (user/application role)

  • Database context (current database)

  • Execution state variables (for example, @@ERROR, @@ROWCOUNT, @@FETCH_STATUS, @@IDENTITY)

  • Top-level temporary tables

With MARS, a default execution environment is associated to a connection. Every new batch that starts executing under a given connection receives a copy of the default environment. Whenever code is executed under a given batch, all changes made to the environment are scoped to the specific batch. Once execution finishes, the execution settings are copied into the default environment. In the case of a single batch issuing several commands to be executed sequentially under the same transaction, semantics are the same as those exposed by connections involving earlier clients or servers.

Parallel Execution

MARS is not designed to remove all requirements for multiple connections in an application. If an application needs true parallel execution of commands against a server, multiple connections should be used.

For example, consider the following scenario. Two command objects are created, one for processing a result set and another for updating data; they share a common connection via MARS. In this scenario, the Transaction.Commit fails on the update until all the results have been read on the first command object, yielding the following exception:

Message: Transaction context in use by another session.

Source: .NET SqlClient Data Provider

Expected: (null)

Received: System.Data.SqlClient.SqlException

There are three options for handling this scenario:

  1. Start the transaction after the reader is created, so that it is not part of the transaction. Every update then becomes its own transaction.

  2. Commit all work after the reader is closed. This has the potential for a substantial batch of updates.

  3. Don't use MARS; instead use a separate connection for each command object as you would have before MARS.

Detecting MARS Support

An application can check for MARS support by reading the SqlConnection.ServerVersion value. The major number should be 9 for SQL Server 2005 and 10 for SQL Server 2008.
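
A minimal sketch of such a check is shown below, assuming a connectionString variable as in the other samples; the first component of ServerVersion is the major version number.

C#
// Minimal sketch only: read the major version from ServerVersion after the
// connection has been opened.
using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();
    int majorVersion = int.Parse(connection.ServerVersion.Split('.')[0]);
    Console.WriteLine("MARS supported: {0}", majorVersion >= 9);
}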

See also

Manipulating Data

Before the introduction of Multiple Active Result Sets (MARS), developers had to use either multiple connections or server-side cursors to solve certain scenarios. In addition, when multiple connections were used in a transactional situation, bound connections (with sp_getbindtoken and sp_bindsession) were required. The following scenarios show how to use a MARS-enabled connection instead of multiple connections.

Using Multiple Commands with MARS

The following Console application demonstrates how to use two SqlDataReader objects with two SqlCommand objects and a single SqlConnection object with MARS enabled.

Example

The example opens a single connection to the AdventureWorks database. Using a SqlCommand object, a SqlDataReader is created. As the reader is used, a second SqlDataReader is opened, using data from the first SqlDataReader as input to the WHERE clause for the second reader.

Note

The following example uses the sample AdventureWorks database included with SQL Server. The connection string provided in the sample code assumes that the database is installed and available on the local computer. Modify the connection string as necessary for your environment.

C#
using System;  
using System.Data;  
using System.Data.SqlClient;  
  
class Class1  
{  
static void Main()  
{  
  // By default, MARS is disabled when connecting  
  // to a MARS-enabled host.  
  // It must be enabled in the connection string.  
  string connectionString = GetConnectionString();  
  
  int vendorID;  
  SqlDataReader productReader = null;  
  string vendorSQL =   
    "SELECT VendorId, Name FROM Purchasing.Vendor";  
  string productSQL =   
    "SELECT Production.Product.Name FROM Production.Product " +  
    "INNER JOIN Purchasing.ProductVendor " +  
    "ON Production.Product.ProductID = " +   
    "Purchasing.ProductVendor.ProductID " +  
    "WHERE Purchasing.ProductVendor.VendorID = @VendorId";  
  
  using (SqlConnection awConnection =   
    new SqlConnection(connectionString))  
  {  
    SqlCommand vendorCmd = new SqlCommand(vendorSQL, awConnection);  
    SqlCommand productCmd =   
      new SqlCommand(productSQL, awConnection);  
  
    productCmd.Parameters.Add("@VendorId", SqlDbType.Int);  
  
    awConnection.Open();  
    using (SqlDataReader vendorReader = vendorCmd.ExecuteReader())  
    {  
      while (vendorReader.Read())  
      {  
        Console.WriteLine(vendorReader["Name"]);  
  
        vendorID = (int)vendorReader["VendorId"];  
  
        productCmd.Parameters["@VendorId"].Value = vendorID;  
        // The following line of code requires  
        // a MARS-enabled connection.  
        productReader = productCmd.ExecuteReader();  
        using (productReader)  
        {  
          while (productReader.Read())  
          {  
            Console.WriteLine("  " +  
              productReader["Name"].ToString());  
          }  
        }  
      }  
    }  
    Console.WriteLine("Press any key to continue");  
    Console.ReadLine();  
  }  
}  
  private static string GetConnectionString()  
  {  
    // To avoid storing the connection string in your code,  
    // you can retrieve it from a configuration file.  
    return "Data Source=(local);Integrated Security=SSPI;" +   
      "Initial Catalog=AdventureWorks;MultipleActiveResultSets=True";  
  }  
}  

Reading and Updating Data with MARS

MARS allows a connection to be used for both read operations and data manipulation language (DML) operations with more than one pending operation. This feature eliminates the need for an application to deal with connection-busy errors. In addition, MARS can replace the use of server-side cursors, which generally consume more resources. Finally, because multiple operations can operate on a single connection, they can share the same transaction context, eliminating the need to use the sp_getbindtoken and sp_bindsession system stored procedures.

Example

The following Console application demonstrates how to use two SqlDataReader objects with three SqlCommand objects and a single SqlConnection object with MARS enabled. The first command object retrieves a list of vendors whose credit rating is 5. The second command object uses the vendor ID provided from a SqlDataReader to load the second SqlDataReader with all of the products for the particular vendor. Each product record is visited by the second SqlDataReader. A calculation is performed to determine what the new OnOrderQty should be. The third command object is then used to update the ProductVendor table with the new value. This entire process takes place within a single transaction, which is rolled back at the end.

Note

The following example uses the sample AdventureWorks database included with SQL Server. The connection string provided in the sample code assumes that the database is installed and available on the local computer. Modify the connection string as necessary for your environment.

C#
using System;  
using System.Collections.Generic;  
using System.Text;  
using System.Data;  
using System.Data.SqlClient;  
  
class Program  
{  
static void Main()  
{  
  // By default, MARS is disabled when connecting  
  // to a MARS-enabled host.  
  // It must be enabled in the connection string.  
  string connectionString = GetConnectionString();  
  
  SqlTransaction updateTx = null;  
  SqlCommand vendorCmd = null;  
  SqlCommand prodVendCmd = null;  
  SqlCommand updateCmd = null;  
  
  SqlDataReader prodVendReader = null;  
  
  int vendorID = 0;  
  int productID = 0;  
  int minOrderQty = 0;  
  int maxOrderQty = 0;  
  int onOrderQty = 0;  
  int recordsUpdated = 0;  
  int totalRecordsUpdated = 0;  
  
  string vendorSQL =  
      "SELECT VendorID, Name FROM Purchasing.Vendor " +   
      "WHERE CreditRating = 5";  
  string prodVendSQL =  
      "SELECT ProductID, MaxOrderQty, MinOrderQty, OnOrderQty " +  
      "FROM Purchasing.ProductVendor " +   
      "WHERE VendorID = @VendorID";  
  string updateSQL =  
      "UPDATE Purchasing.ProductVendor " +   
      "SET OnOrderQty = @OrderQty " +  
      "WHERE ProductID = @ProductID AND VendorID = @VendorID";  
  
  using (SqlConnection awConnection =   
    new SqlConnection(connectionString))  
  {  
    awConnection.Open();  
    updateTx = awConnection.BeginTransaction();  
  
    vendorCmd = new SqlCommand(vendorSQL, awConnection);  
    vendorCmd.Transaction = updateTx;  
  
    prodVendCmd = new SqlCommand(prodVendSQL, awConnection);  
    prodVendCmd.Transaction = updateTx;  
    prodVendCmd.Parameters.Add("@VendorID", SqlDbType.Int);  
  
    updateCmd = new SqlCommand(updateSQL, awConnection);  
    updateCmd.Transaction = updateTx;  
    updateCmd.Parameters.Add("@OrderQty", SqlDbType.Int);  
    updateCmd.Parameters.Add("@ProductID", SqlDbType.Int);  
    updateCmd.Parameters.Add("@VendorID", SqlDbType.Int);  
  
    using (SqlDataReader vendorReader = vendorCmd.ExecuteReader())  
    {  
      while (vendorReader.Read())  
      {  
        Console.WriteLine(vendorReader["Name"]);  
  
        vendorID = (int) vendorReader["VendorID"];  
        prodVendCmd.Parameters["@VendorID"].Value = vendorID;  
        prodVendReader = prodVendCmd.ExecuteReader();  
  
        using (prodVendReader)  
        {  
          while (prodVendReader.Read())  
          {  
            productID = (int) prodVendReader["ProductID"];  
  
            if (prodVendReader["OnOrderQty"] == DBNull.Value)  
            {  
              minOrderQty = (int) prodVendReader["MinOrderQty"];  
              onOrderQty = minOrderQty;  
            }  
            else  
            {  
              maxOrderQty = (int) prodVendReader["MaxOrderQty"];  
              onOrderQty = (int)(maxOrderQty / 2);  
            }  
  
            updateCmd.Parameters["@OrderQty"].Value = onOrderQty;  
            updateCmd.Parameters["@ProductID"].Value = productID;  
            updateCmd.Parameters["@VendorID"].Value = vendorID;  
  
            recordsUpdated = updateCmd.ExecuteNonQuery();  
            totalRecordsUpdated += recordsUpdated;  
          }  
        }  
      }  
    }  
    Console.WriteLine("Total Records Updated: " +   
      totalRecordsUpdated.ToString());  
    updateTx.Rollback();  
    Console.WriteLine("Transaction Rolled Back");  
  }  
  
  Console.WriteLine("Press any key to continue");  
  Console.ReadLine();  
}  
private static string GetConnectionString()  
{  
  // To avoid storing the connection string in your code,  
  // you can retrieve it from a configuration file.  
  return "Data Source=(local);Integrated Security=SSPI;" +   
    "Initial Catalog=AdventureWorks;" +   
    "MultipleActiveResultSets=True";  
  }  
}  

See also

Asynchronous Operations

Some database operations, such as command executions, can take significant time to complete. In such a case, single-threaded applications must block other operations and wait for the command to finish before they can continue their own operations. In contrast, being able to assign the long-running operation to a background thread allows the foreground thread to remain active throughout the operation. In a Windows application, for example, delegating the long-running operation to a background thread allows the user interface thread to remain responsive while the operation is executing.

The .NET Framework provides several standard asynchronous design patterns that developers can use to take advantage of background threads and free the user interface or high-priority threads to complete other operations. ADO.NET supports these same design patterns in its SqlCommand class. Specifically, the BeginExecuteNonQuery, BeginExecuteReader, and BeginExecuteXmlReader methods, paired with the EndExecuteNonQuery, EndExecuteReader, and EndExecuteXmlReader methods, provide the asynchronous support.

Note

Asynchronous programming is a core feature of the .NET Framework, and ADO.NET takes full advantage of the standard design patterns. For more information about the different asynchronous techniques available to developers, see Calling Synchronous Methods Asynchronously.

Although using asynchronous techniques with ADO.NET features does not add any special considerations, it is likely that more developers will use asynchronous features in ADO.NET than in other areas of the .NET Framework. It is important to be aware of the benefits and pitfalls of creating multithreaded applications. The examples that follow in this section point out several important issues that developers will need to take into account when building applications that incorporate multithreaded functionality.
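
As a minimal sketch of the Begin/End pattern, the following fragment starts a command and then polls for completion, assuming a connectionString variable whose value includes "Asynchronous Processing=true" (as the callback example later in this section notes). The topics listed below provide complete applications.

C#
// Minimal sketch only: start the command, poll IAsyncResult.IsCompleted while
// doing other work, then call the matching End method to obtain the reader.
using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();
    SqlCommand command = new SqlCommand(
        "WAITFOR DELAY '0:0:05'; " +                       // emulate a long-running command
        "SELECT COUNT(*) FROM Production.Product;", connection);

    IAsyncResult result = command.BeginExecuteReader();
    while (!result.IsCompleted)
    {
        Console.Write(".");                                // do other work here instead
        System.Threading.Thread.Sleep(250);
    }
    using (SqlDataReader reader = command.EndExecuteReader(result))
    {
        while (reader.Read())
        {
            Console.WriteLine();
            Console.WriteLine("Product count = {0}", reader.GetInt32(0));
        }
    }
}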

In This Section

Windows Applications Using Callbacks
Provides an example demonstrating how to execute an asynchronous command safely, correctly handling interaction with a form and its contents from a separate thread.

ASP.NET Applications Using Wait Handles
Provides an example demonstrating how to execute multiple concurrent commands from an ASP.NET page, using Wait handles to manage the operation at completion of all the commands.

Polling in Console Applications
Provides an example demonstrating the use of polling to wait for the completion of an asynchronous command execution from a console application. This technique is also valid in a class library or other application without a user interface.

See also

Windows Applications Using Callbacks

In most asynchronous processing scenarios, you want to start a database operation and continue running other processes without waiting for the database operation to complete. However, many scenarios require doing something once the database operation has ended. In a Windows application, for example, you may want to delegate the long-running operation to a background thread while allowing the user interface thread to remain responsive. However, when the database operation is complete, you want to use the results to populate the form. This type of scenario is best implemented with a callback.

You define a callback by specifying an AsyncCallback delegate in the BeginExecuteNonQuery, BeginExecuteReader, or BeginExecuteXmlReader method. The delegate is called when the operation is complete. You can pass the delegate a reference to the SqlCommand itself, making it easy to access the SqlCommand object and call the appropriate End method without having to use a global variable.

Example

The following Windows application demonstrates the use of the BeginExecuteNonQuery method, executing a Transact-SQL statement that includes a delay of a few seconds (emulating a long-running command).

This example demonstrates a number of important techniques, including calling a method that interacts with the form from a separate thread. In addition, this example demonstrates how you must block users from concurrently executing a command multiple times, and how you must ensure that the form does not close before the callback procedure is called.

To set up this example, create a new Windows application. Place a Button control and two Label controls on the form (accepting the default name for each control). Add the following code to the form's class, modifying the connection string as necessary for your environment.

C#
// Add these to the top of the class, if they're not already there:  
using System;  
using System.Data;  
using System.Data.SqlClient;  
  
// Hook up the form's Load event handler (you can double-click on   
// the form's design surface in Visual Studio), and then add   
// this code to the form's class:  
  
// You'll need this delegate in order to display text from a thread  
// other than the form's thread. See the HandleCallback  
// procedure for more information.  
// This same delegate matches both the DisplayStatus   
// and DisplayResults methods.  
private delegate void DisplayInfoDelegate(string Text);  
  
// This flag ensures that the user doesn't attempt  
// to restart the command or close the form while the   
// asynchronous command is executing.  
private bool isExecuting;  
  
// This example maintains the connection object   
// externally, so that it's available for closing.  
private SqlConnection connection;  
  
private static string GetConnectionString()  
{  
    // To avoid storing the connection string in your code,              
    // you can retrieve it from a configuration file.   
  
    // If you have not included "Asynchronous Processing=true" in the  
    // connection string, the command will not be able  
    // to execute asynchronously.  
    return "Data Source=(local);Integrated Security=SSPI;" +  
    "Initial Catalog=AdventureWorks; Asynchronous Processing=true";  
}  
  
private void DisplayStatus(string Text)  
{  
    this.label1.Text = Text;  
}  
  
private void DisplayResults(string Text)  
{  
    this.label1.Text = Text;  
    DisplayStatus("Ready");  
}  
  
private void Form1_FormClosing(object sender, System.Windows.Forms.FormClosingEventArgs e)  
{  
    if (isExecuting)  
    {  
        MessageBox.Show(this, "Can't close the form until " +  
        "the pending asynchronous command has completed. Please " +  
        "wait...");
        e.Cancel = true;  
    }  
}  
  
private void button1_Click(object sender, System.EventArgs e)  
{  
    if (isExecuting)  
    {  
        MessageBox.Show(this, "Already executing. Please wait until " +  
        "the current query has completed.");  
    }  
    else  
    {  
        SqlCommand command = null;  
        try  
        {  
            DisplayResults("");  
            DisplayStatus("Connecting...");  
            connection = new SqlConnection(GetConnectionString());  
            // To emulate a long-running query, wait for   
            // a few seconds before working with the data.  
            // This command doesn't do much, but that's the point--  
            // it doesn't change your data, in the long run.  
            string commandText =  
                "WAITFOR DELAY '0:0:05';" +  
                "UPDATE Production.Product " +  
                "SET ReorderPoint = ReorderPoint + 1 " +  
                "WHERE ReorderPoint Is Not Null;" +  
                "UPDATE Production.Product " +  
                "SET ReorderPoint = ReorderPoint - 1 " +  
                "WHERE ReorderPoint Is Not Null";  
  
            command = new SqlCommand(commandText, connection);  
            connection.Open();  
  
            DisplayStatus("Executing...");  
            isExecuting = true;  
            // Although it's not required that you pass the   
            // SqlCommand object as the second parameter in the   
            // BeginExecuteNonQuery call, doing so makes it easier  
            // to call EndExecuteNonQuery in the callback procedure.  
            AsyncCallback callback = new AsyncCallback(HandleCallback);  
  
            // Once the BeginExecuteNonQuery method is called,  
            // the code continues--and the user can interact with  
            // the form--while the server executes the query.  
            command.BeginExecuteNonQuery(callback, command);  
  
        }  
        catch (Exception ex)  
        {  
            isExecuting = false;  
            DisplayStatus($"Ready (last error: {ex.Message})");
            if (connection != null)  
            {  
                connection.Close();  
            }  
        }  
    }  
}  
  
private void HandleCallback(IAsyncResult result)  
{  
    try  
    {  
        // Retrieve the original command object, passed  
        // to this procedure in the AsyncState property  
        // of the IAsyncResult parameter.  
        SqlCommand command = (SqlCommand)result.AsyncState;  
        int rowCount = command.EndExecuteNonQuery(result);  
        string rowText = " rows affected.";  
        if (rowCount == 1)  
        {  
            rowText = " row affected.";  
        }  
        rowText = rowCount + rowText;  
  
        // You may not interact with the form and its contents  
        // from a different thread, and this callback procedure  
        // is all but guaranteed to be running from a different thread  
        // than the form. Therefore you cannot simply call code that   
        // displays the results, like this:  
        // DisplayResults(rowText)  
  
        // Instead, you must call the procedure from the form's thread.  
        // One simple way to accomplish this is to call the Invoke  
        // method of the form, which calls the delegate you supply  
        // from the form's thread.   
        DisplayInfoDelegate del =   
         new DisplayInfoDelegate(DisplayResults);  
        this.Invoke(del, rowText);  
    }  
    catch (Exception ex)  
    {  
        // Because you're now running code in a separate thread,   
        // if you don't handle the exception here, none of your other  
        // code will catch the exception. Because none of your  
        // code is on the call stack in this thread, there's nothing  
        // higher up the stack to catch the exception if you don't   
        // handle it here. You can either log the exception or   
        // invoke a delegate (as in the non-error case in this   
        // example) to display the error on the form. In no case  
        // can you simply display the error without executing a   
        // delegate as in the try block here.   
  
        // You can create the delegate instance as you   
        // invoke it, like this:  
        this.Invoke(new DisplayInfoDelegate(DisplayStatus),  
            $"Ready (last error: {ex.Message}");
    }  
    finally  
    {  
        isExecuting = false;  
        if (connection != null)  
        {  
            connection.Close();  
        }  
    }  
}  
  
private void Form1_Load(object sender, System.EventArgs e)  
{  
    this.button1.Click += new System.EventHandler(this.button1_Click);  
    this.FormClosing += new System.Windows.Forms.  
        FormClosingEventHandler(this.Form1_FormClosing);  
}  

See also

ASP.NET Applications Using Wait Handles

The callback and polling models for handling asynchronous operations are useful when your application is processing only one asynchronous operation at a time. The Wait models provide a more flexible way of processing multiple asynchronous operations. There are two Wait models, named for the WaitHandle methods used to implement them: the Wait (Any) model and the Wait (All) model.

To use either Wait model, you need to use the AsyncWaitHandle property of the IAsyncResult object returned by the BeginExecuteNonQuery, BeginExecuteReader, or BeginExecuteXmlReader methods. The WaitAny and WaitAll methods both require you to send the WaitHandle objects as an argument, grouped together in an array.

Both Wait methods monitor the asynchronous operations, waiting for completion. The WaitAny method waits for any of the operations to complete or time out. Once you know a particular operation is complete, you can process its results and then continue waiting for the next operation to complete or time out. The WaitAll method waits for all of the processes in the array of WaitHandle instances to complete or time out before continuing.

The Wait models' benefit is most striking when you need to run multiple operations of some length on different servers, or when your server is powerful enough to process all the queries at the same time. In the examples presented here, three queries emulate long-running processes by adding WAITFOR statements of varying lengths to otherwise inconsequential queries.

Example: Wait (Any) Model

The following example illustrates the Wait (Any) model. Once three asynchronous processes are started, the WaitAny method is called to wait for the completion of any one of them. As each process completes, the EndExecuteReader method is called and the resulting SqlDataReader object is read. At this point, a real-world application would likely use the SqlDataReader to populate a portion of the page. In this simple example, the time the process completed is added to a text box corresponding to the process. Taken together, the times in the text boxes illustrate the point: Code is executed each time a process completes.

To set up this example, create a new ASP.NET Web Site project. Place a Button control and four TextBox controls on the page (accepting the default name for each control).

Add the following code to the form's class, modifying the connection string as necessary for your environment.

C#
// Add the following using statements, if they are not already there.  
using System;  
using System.Data;  
using System.Configuration;  
using System.Web;  
using System.Web.Security;  
using System.Web.UI;  
using System.Web.UI.WebControls;  
using System.Web.UI.WebControls.WebParts;  
using System.Web.UI.HtmlControls;  
using System.Threading;  
using System.Data.SqlClient;  
  
// Add this code to the page's class  
string GetConnectionString()  
     //  To avoid storing the connection string in your code,              
     //  you can retrieve it from a configuration file.   
     //  If you have not included "Asynchronous Processing=true"   
     //  in the connection string, the command will not be able  
     //  to execute asynchronously.  
{  
     return "Data Source=(local);Integrated Security=SSPI;" +  
          "Initial Catalog=AdventureWorks;" +  
          "Asynchronous Processing=true";  
}  
void Button1_Click(object sender, System.EventArgs e)  
{  
     //  In a real-world application, you might be connecting to   
     //   three different servers or databases. For the example,  
     //   we connect to only one.  
  
     SqlConnection connection1 =   
          new SqlConnection(GetConnectionString());  
     SqlConnection connection2 =   
          new SqlConnection(GetConnectionString());  
     SqlConnection connection3 =   
          new SqlConnection(GetConnectionString());  
     //  To keep the example simple, all three asynchronous   
     //  processes select a row from the same table. WAITFOR  
     //  commands are used to emulate long-running processes  
     //  that complete after different periods of time.  
  
     string commandText1 = "WAITFOR DELAY '0:0:01';" +   
          "SELECT * FROM Production.Product " +   
          "WHERE ProductNumber = 'BL-2036'";  
     string commandText2 = "WAITFOR DELAY '0:0:05';" +   
          "SELECT * FROM Production.Product " +   
          "WHERE ProductNumber = 'BL-2036'";  
     string commandText3 = "WAITFOR DELAY '0:0:10';" +   
          "SELECT * FROM Production.Product " +   
          "WHERE ProductNumber = 'BL-2036'";  
     try  
          //  For each process, open a connection and begin   
          //  execution. Use the IAsyncResult object returned by   
          //  BeginExecuteReader to add a WaitHandle for the   
          //  process to the array.  
     {  
          connection1.Open();  
          SqlCommand command1 =  
               new SqlCommand(commandText1, connection1);  
          IAsyncResult result1 = command1.BeginExecuteReader();  
          WaitHandle waitHandle1 = result1.AsyncWaitHandle;  
  
          connection2.Open();  
          SqlCommand command2 =  
               new SqlCommand(commandText2, connection2);  
          IAsyncResult result2 = command2.BeginExecuteReader();  
          WaitHandle waitHandle2 = result2.AsyncWaitHandle;  
  
          connection3.Open();  
          SqlCommand command3 =  
               new SqlCommand(commandText3, connection3);  
          IAsyncResult result3 = command3.BeginExecuteReader();  
          WaitHandle waitHandle3 = result3.AsyncWaitHandle;  
  
          WaitHandle[] waitHandles = {  
               waitHandle1, waitHandle2, waitHandle3  
          };  
  
          int index;  
          for (int countWaits = 0; countWaits <= 2; countWaits++)  
          {  
               //  WaitAny waits for any of the processes to   
               //  complete. The return value is either the index   
               //  of the array element whose process just   
               //  completed, or the WaitTimeout value.  
  
               index = WaitHandle.WaitAny(waitHandles,   
                    60000, false);  
               //  This example doesn't actually do anything with   
               //  the data returned by the processes, but the   
               //  code opens readers for each just to demonstrate       
               //  the concept.  
               //  Instead of using the returned data to fill the   
               //  controls on the page, the example adds the time  
               //  the process was completed to the corresponding  
               //  text box.  
  
               switch (index)  
               {  
                    case 0:  
                         SqlDataReader reader1;  
                         reader1 =   
                              command1.EndExecuteReader(result1);  
                         if (reader1.Read())  
                         {  
                           TextBox1.Text =   
                           "Completed " +  
                           System.DateTime.Now.ToLongTimeString();  
                         }  
                         reader1.Close();  
                         break;  
                    case 1:  
                         SqlDataReader reader2;  
                         reader2 =   
                              command2.EndExecuteReader(result2);  
                         if (reader2.Read())  
                         {  
                           TextBox2.Text =   
                           "Completed " +  
                           System.DateTime.Now.ToLongTimeString();  
                         }  
                         reader2.Close();  
                         break;  
                    case 2:  
                         SqlDataReader reader3;  
                         reader3 =   
                              command3.EndExecuteReader(result3);  
                         if (reader3.Read())  
                         {  
                           TextBox3.Text =   
                           "Completed " +  
                           System.DateTime.Now.ToLongTimeString();  
                         }  
                         reader3.Close();  
                         break;  
                     case WaitHandle.WaitTimeout:  
                          throw new Exception("Timeout");  
               }  
          }  
     }  
     catch (Exception ex)  
     {  
          TextBox4.Text = ex.ToString();  
     }  
     connection1.Close();  
     connection2.Close();  
     connection3.Close();  
}  

Example: Wait (All) Model

The following example illustrates the Wait (All) model. Once three asynchronous processes are started, the WaitAll method is called to wait for the processes to complete or time out.

As in the Wait (Any) example, the time each process completed is added to a text box corresponding to the process. Again, the times in the text boxes illustrate the point: Code following the WaitAll method is executed only after all the processes are complete.

To set up this example, create a new ASP.NET Web Site project. Place a Button control and four TextBox controls on the page (accepting the default name for each control).

Add the following code to the form's class, modifying the connection string as necessary for your environment.

C#
// Add the following using statements, if they are not already there.  
using System;  
using System.Data;  
using System.Configuration;  
using System.Web;  
using System.Web.Security;  
using System.Web.UI;  
using System.Web.UI.WebControls;  
using System.Web.UI.WebControls.WebParts;  
using System.Web.UI.HtmlControls;  
using System.Threading;  
using System.Data.SqlClient;  
  
// Add this code to the page's class  
string GetConnectionString()  
    //  To avoid storing the connection string in your code,              
    //  you can retrieve it from a configuration file.   
    //  If you have not included "Asynchronous Processing=true"   
    //  in the connection string, the command will not be able  
    //  to execute asynchronously.  
{  
    return "Data Source=(local);Integrated Security=SSPI;" +  
        "Initial Catalog=AdventureWorks;" +  
        "Asynchronous Processing=true";  
}  
void Button1_Click(object sender, System.EventArgs e)  
{  
    //  In a real-world application, you might be connecting to   
    //   three different servers or databases. For the example,  
    //   we connect to only one.  
  
    SqlConnection connection1 =   
        new SqlConnection(GetConnectionString());  
    SqlConnection connection2 =   
        new SqlConnection(GetConnectionString());  
    SqlConnection connection3 =   
        new SqlConnection(GetConnectionString());  
    //  To keep the example simple, all three asynchronous   
    //  processes execute UPDATE queries that result in  
      //  no change to the data. WAITFOR  
    //  commands are used to emulate long-running processes  
    //  that complete after different periods of time.  
  
    string commandText1 =   
        "UPDATE Production.Product " +  
        "SET ReorderPoint = ReorderPoint + 1 " +  
        "WHERE ReorderPoint Is Not Null;" +  
        "WAITFOR DELAY '0:0:01';" +  
        "UPDATE Production.Product " +  
        "SET ReorderPoint = ReorderPoint - 1 " +  
        "WHERE ReorderPoint Is Not Null";  
  
    string commandText2 =   
      "UPDATE Production.Product " +  
      "SET ReorderPoint = ReorderPoint + 1 " +  
      "WHERE ReorderPoint Is Not Null;" +  
      "WAITFOR DELAY '0:0:05';" +  
      "UPDATE Production.Product " +  
      "SET ReorderPoint = ReorderPoint - 1 " +  
      "WHERE ReorderPoint Is Not Null";  
  
    string commandText3 =  
       "UPDATE Production.Product " +  
       "SET ReorderPoint = ReorderPoint + 1 " +  
       "WHERE ReorderPoint Is Not Null;" +  
       "WAITFOR DELAY '0:0:10';" +  
       "UPDATE Production.Product " +  
       "SET ReorderPoint = ReorderPoint - 1 " +  
       "WHERE ReorderPoint Is Not Null";  
    try  
        //  For each process, open a connection and begin   
        //  execution. Use the IAsyncResult object returned by   
        //  BeginExecuteReader to add a WaitHandle for the   
        //  process to the array.  
    {  
        connection1.Open();  
        SqlCommand command1 =  
            new SqlCommand(commandText1, connection1);  
        IAsyncResult result1 = command1.BeginExecuteNonQuery();  
        WaitHandle waitHandle1 = result1.AsyncWaitHandle;  
        connection2.Open();  
  
        SqlCommand command2 =  
            new SqlCommand(commandText2, connection2);  
        IAsyncResult result2 = command2.BeginExecuteNonQuery();  
        WaitHandle waitHandle2 = result2.AsyncWaitHandle;  
        connection3.Open();  
  
        SqlCommand command3 =  
            new SqlCommand(commandText3, connection3);  
        IAsyncResult result3 = command3.BeginExecuteNonQuery();  
        WaitHandle waitHandle3 = result3.AsyncWaitHandle;  
  
        WaitHandle[] waitHandles = {  
            waitHandle1, waitHandle2, waitHandle3  
        };  
  
        bool result;  
        //  WaitAll waits for all of the processes to   
        //  complete. The return value is True if the processes  
        //  all completed successfully, False if any process  
        //  timed out.  
  
        result = WaitHandle.WaitAll(waitHandles, 60000, false);  
        if(result)  
        {  
            long rowCount1 =   
                command1.EndExecuteNonQuery(result1);  
            TextBox1.Text = "Completed " +  
                System.DateTime.Now.ToLongTimeString();  
            long rowCount2 =   
                command2.EndExecuteNonQuery(result2);  
            TextBox2.Text = "Completed " +  
                System.DateTime.Now.ToLongTimeString();  
  
            long rowCount3 =   
                command3.EndExecuteNonQuery(result3);  
            TextBox3.Text = "Completed " +  
                System.DateTime.Now.ToLongTimeString();  
        }  
        else  
        {  
            throw new Exception("Timeout");  
        }  
    }  
  
    catch (Exception ex)  
    {  
        TextBox4.Text = ex.ToString();  
    }  
    connection1.Close();  
    connection2.Close();  
    connection3.Close();  
}  

See also

Polling in Console Applications

Asynchronous operations in ADO.NET allow you to initiate time-consuming database operations on one thread while performing other tasks on another thread. In most scenarios, however, you will eventually reach a point where your application should not continue until the database operation is complete. For such cases, it is useful to poll the asynchronous operation to determine whether the operation has completed or not.

You can use the IsCompleted property to find out whether or not the operation has completed.

Example

The following console application updates data within the AdventureWorks sample database, doing its work asynchronously. In order to emulate a long-running process, this example inserts a WAITFOR statement in the command text. Normally, you would not try to make your commands run slower, but doing so in this case makes it easier to demonstrate asynchronous behavior.

C#
using System;  
using System.Data;  
using System.Data.SqlClient;  
  
class Class1  
{  
    [STAThread]  
    static void Main()  
    {  
        // The WAITFOR statement simply adds enough time to   
        // prove the asynchronous nature of the command.  
  
        string commandText =  
          "UPDATE Production.Product SET ReorderPoint = " +  
          "ReorderPoint + 1 " +  
          "WHERE ReorderPoint Is Not Null;" +  
          "WAITFOR DELAY '0:0:3';" +  
          "UPDATE Production.Product SET ReorderPoint = " +  
          "ReorderPoint - 1 " +  
          "WHERE ReorderPoint Is Not Null";  
  
        RunCommandAsynchronously(  
            commandText, GetConnectionString());  
  
        Console.WriteLine("Press Enter to continue.");  
        Console.ReadLine();  
    }  
  
    private static void RunCommandAsynchronously(  
      string commandText, string connectionString)  
    {  
        // Given command text and connection string, asynchronously  
        // execute the specified command against the connection.   
        // For this example, the code displays an indicator as it's   
        // working, verifying the asynchronous behavior.   
        using (SqlConnection connection =  
          new SqlConnection(connectionString))  
        {  
            try  
            {  
                int count = 0;  
                SqlCommand command =   
                    new SqlCommand(commandText, connection);  
                connection.Open();  
  
                IAsyncResult result =   
                    command.BeginExecuteNonQuery();  
                while (!result.IsCompleted)  
                {  
                    Console.WriteLine(  
                                    "Waiting ({0})", count++);  
                    // Wait for 1/10 second, so the counter  
                    // doesn't consume all available   
                    // resources on the main thread.  
                    System.Threading.Thread.Sleep(100);  
                }  
                Console.WriteLine(  
                    "Command complete. Affected {0} rows.",  
                command.EndExecuteNonQuery(result));  
            }  
            catch (SqlException ex)  
            {  
                Console.WriteLine("Error ({0}): {1}",   
                    ex.Number, ex.Message);  
            }  
            catch (InvalidOperationException ex)  
            {  
                Console.WriteLine("Error: {0}", ex.Message);  
            }  
            catch (Exception ex)  
            {  
                // You might want to pass these errors  
                // back out to the caller.  
                Console.WriteLine("Error: {0}", ex.Message);  
            }  
        }  
    }  
  
    private static string GetConnectionString()  
    {  
        // To avoid storing the connection string in your code,              
        // you can retrieve it from a configuration file.   
  
        // If you have not included "Asynchronous Processing=true"  
        // in the connection string, the command will not be able  
        // to execute asynchronously.  
        return "Data Source=(local);Integrated Security=SSPI;" +  
        "Initial Catalog=AdventureWorks; " +   
        "Asynchronous Processing=true";  
    }  
}  

See also

Table-Valued Parameters

Table-valued parameters provide an easy way to marshal multiple rows of data from a client application to SQL Server without requiring multiple round trips or special server-side logic for processing the data. You can use table-valued parameters to encapsulate rows of data in a client application and send the data to the server in a single parameterized command. The incoming data rows are stored in a table variable that can then be operated on by using Transact-SQL.

Column values in table-valued parameters can be accessed using standard Transact-SQL SELECT statements. Table-valued parameters are strongly typed and their structure is automatically validated. The size of table-valued parameters is limited only by server memory.

Note

You cannot return data in a table-valued parameter. Table-valued parameters are input-only; the OUTPUT keyword is not supported.

For more information about table-valued parameters, see the following resources.

Resource Description
Table-Valued Parameters (Database Engine) in SQL Server Books Online Describes how to create and use table-valued parameters.
User-Defined Table Types in SQL Server Books Online Describes user-defined table types that are used to declare table-valued parameters.

Passing Multiple Rows in Previous Versions of SQL Server

Before table-valued parameters were introduced in SQL Server 2008, the options for passing multiple rows of data to a stored procedure or a parameterized SQL command were limited. A developer could choose from the following options for passing multiple rows to the server:

  • Use a series of individual parameters to represent the values in multiple columns and rows of data. The amount of data that can be passed by using this method is limited by the number of parameters allowed. SQL Server procedures can have, at most, 2100 parameters. Server-side logic is required to assemble these individual values into a table variable or a temporary table for processing.

  • Bundle multiple data values into delimited strings or XML documents and then pass those text values to a procedure or statement. This requires the procedure or statement to include the logic necessary for validating the data structures and unbundling the values.

  • Create a series of individual SQL statements for data modifications that affect multiple rows, such as those created by calling the Update method of a SqlDataAdapter. Changes can be submitted to the server individually or batched into groups. However, even when submitted in batches that contain multiple statements, each statement is executed separately on the server.

  • Use the bcp utility program or the SqlBulkCopy object to load many rows of data into a table. Although this technique is very efficient, it does not support server-side processing unless the data is loaded into a temporary table or table variable (a minimal SqlBulkCopy sketch follows this list).
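
As an illustration of the last option above, the following fragment is a minimal sketch of loading the rows in a DataTable into a staging table with SqlBulkCopy. The connectionString variable, the categories DataTable, and the dbo.CategoryStaging table are assumptions made for the example.

C#
// Minimal sketch: bulk-load a DataTable into a staging table so that
// server-side logic can process the rows afterward. Assumes "categories"
// is a populated DataTable and dbo.CategoryStaging has a matching schema.
using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();
    using (SqlBulkCopy bulkCopy = new SqlBulkCopy(connection))
    {
        bulkCopy.DestinationTableName = "dbo.CategoryStaging";
        bulkCopy.WriteToServer(categories);
    }
}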

Creating Table-Valued Parameter Types

Table-valued parameters are based on strongly-typed table structures that are defined by using Transact-SQL CREATE TYPE statements. You have to create a table type and define the structure in SQL Server before you can use table-valued parameters in your client applications. For more information about creating table types, see User-Defined Table Types in SQL Server Books Online.

The following statement creates a table type named CategoryTableType that consists of CategoryID and CategoryName columns:

CREATE TYPE dbo.CategoryTableType AS TABLE  
    ( CategoryID int, CategoryName nvarchar(50) )  

After you create a table type, you can declare table-valued parameters based on that type. The following Transact-SQL fragment demonstrates how to declare a table-valued parameter in a stored procedure definition. Note that the READONLY keyword is required for declaring a table-valued parameter.

CREATE PROCEDURE usp_UpdateCategories   
    (@tvpNewCategories dbo.CategoryTableType READONLY)  

Modifying Data with Table-Valued Parameters (Transact-SQL)

Table-valued parameters can be used in set-based data modifications that affect multiple rows by executing a single statement. For example, you can select all the rows in a table-valued parameter and insert them into a database table, or you can create an update statement by joining a table-valued parameter to the table you want to update.

The following Transact-SQL UPDATE statement demonstrates how to use a table-valued parameter by joining it to the Categories table. When you use a table-valued parameter with a JOIN in a FROM clause, you must also alias it, as shown here, where the table-valued parameter is aliased as "ec":

UPDATE dbo.Categories  
    SET Categories.CategoryName = ec.CategoryName  
    FROM dbo.Categories INNER JOIN @tvpEditedCategories AS ec  
    ON dbo.Categories.CategoryID = ec.CategoryID;  

This Transact-SQL example demonstrates how to select rows from a table-valued parameter to perform an INSERT in a single set-based operation.

INSERT INTO dbo.Categories (CategoryID, CategoryName)  
    SELECT nc.CategoryID, nc.CategoryName FROM @tvpNewCategories AS nc;  

Limitations of Table-Valued Parameters

There are several limitations to table-valued parameters:

  • You cannot pass table-valued parameters to CLR user-defined functions.

  • Table-valued parameters can only be indexed to support UNIQUE or PRIMARY KEY constraints. SQL Server does not maintain statistics on table-valued parameters.

  • Table-valued parameters are read-only in Transact-SQL code. You cannot update the column values in the rows of a table-valued parameter and you cannot insert or delete rows. To modify the data that is passed to a stored procedure or parameterized statement in a table-valued parameter, you must insert the data into a temporary table or into a table variable.

  • You cannot use ALTER TABLE statements to modify the design of table-valued parameters.

Configuring a SqlParameter Example

System.Data.SqlClient supports populating table-valued parameters from DataTable, DbDataReader, or IEnumerable<SqlDataRecord> objects. You must specify a type name for the table-valued parameter by using the TypeName property of a SqlParameter. The TypeName must match the name of a compatible type previously created on the server. The following code fragment demonstrates how to configure SqlParameter to insert data.

In the following example, the addedCategories variable contains a DataTable. To see how the variable is populated, see the examples in the next section, Passing a Table-Valued Parameter to a Stored Procedure.
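
If you are not starting from an existing DataSet, you can also build a suitable DataTable directly. The following fragment is a minimal sketch; the column names and types mirror dbo.CategoryTableType, and the row values are illustrative only.

C#
// Build a DataTable whose schema matches dbo.CategoryTableType.
DataTable addedCategories = new DataTable();
addedCategories.Columns.Add("CategoryID", typeof(int));
addedCategories.Columns.Add("CategoryName", typeof(string));

// Add the rows to be sent to the server (sample values only).
addedCategories.Rows.Add(101, "Organic Produce");
addedCategories.Rows.Add(102, "Seasonal Items");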

C#
// Configure the command and parameter.  
SqlCommand insertCommand = new SqlCommand(sqlInsert, connection);  
SqlParameter tvpParam = insertCommand.Parameters.AddWithValue("@tvpNewCategories", addedCategories);  
tvpParam.SqlDbType = SqlDbType.Structured;  
tvpParam.TypeName = "dbo.CategoryTableType";  

You can also use any object derived from DbDataReader to stream rows of data to a table-valued parameter, as shown in this fragment:

C#
// Configure the SqlCommand and table-valued parameter.  
SqlCommand insertCommand = new SqlCommand("usp_InsertCategories", connection);  
insertCommand.CommandType = CommandType.StoredProcedure;  
SqlParameter tvpParam = insertCommand.Parameters.AddWithValue("@tvpNewCategories", dataReader);  
tvpParam.SqlDbType = SqlDbType.Structured;  

Passing a Table-Valued Parameter to a Stored Procedure

This example demonstrates how to pass table-valued parameter data to a stored procedure. The code extracts added rows into a new DataTable by using the GetChanges method. The code then defines a SqlCommand, setting the CommandType property to StoredProcedure. The SqlParameter is populated by using the AddWithValue method and the SqlDbType is set to Structured. The SqlCommand is then executed by using the ExecuteNonQuery method.

C#
// Assumes connection is an open SqlConnection object.  
using (connection)  
{  
  // Create a DataTable with the modified rows.  
  DataTable addedCategories = CategoriesDataTable.GetChanges(DataRowState.Added);  

  // Configure the SqlCommand and SqlParameter.  
  SqlCommand insertCommand = new SqlCommand("usp_InsertCategories", connection);  
  insertCommand.CommandType = CommandType.StoredProcedure;  
  SqlParameter tvpParam = insertCommand.Parameters.AddWithValue("@tvpNewCategories", addedCategories);  
  tvpParam.SqlDbType = SqlDbType.Structured;  

  // Execute the command.  
  insertCommand.ExecuteNonQuery();  
}  

Passing a Table-Valued Parameter to a Parameterized SQL Statement

The following example demonstrates how to insert data into the dbo.Categories table by using an INSERT statement with a SELECT subquery that has a table-valued parameter as the data source. When passing a table-valued parameter to a parameterized SQL statement, you must specify a type name for the table-valued parameter by using the new TypeName property of a SqlParameter. This TypeName must match the name of a compatible type previously created on the server. The code in this example uses the TypeName property to reference the type structure defined in dbo.CategoryTableType.

Note

If you supply a value for an identity column in a table-valued parameter, you must issue the SET IDENTITY_INSERT statement for the session.
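
For example, you might enable identity inserts for the session before executing the parameterized statement. The following fragment is a hypothetical sketch; it assumes that CategoryID is an identity column of dbo.Categories and that connection is the open SqlConnection used in the example below.

C#
// Hypothetical sketch: allow explicit values for the identity column
// during this session. Assumes CategoryID is an identity column.
using (SqlCommand identityCommand = new SqlCommand(
    "SET IDENTITY_INSERT dbo.Categories ON;", connection))
{
    identityCommand.ExecuteNonQuery();
}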

C#
// Assumes connection is an open SqlConnection.  
using (connection)  
{  
  // Create a DataTable with the modified rows.  
  DataTable addedCategories = CategoriesDataTable.GetChanges(DataRowState.Added);  

  // Define the INSERT-SELECT statement.  
  string sqlInsert =   
      "INSERT INTO dbo.Categories (CategoryID, CategoryName)"  
      + " SELECT nc.CategoryID, nc.CategoryName"  
      + " FROM @tvpNewCategories AS nc;"  

  // Configure the command and parameter.  
  SqlCommand insertCommand = new SqlCommand(sqlInsert, connection);  
  SqlParameter tvpParam = insertCommand.Parameters.AddWithValue("@tvpNewCategories", addedCategories);  
  tvpParam.SqlDbType = SqlDbType.Structured;  
  tvpParam.TypeName = "dbo.CategoryTableType";  

  // Execute the command.  
  insertCommand.ExecuteNonQuery();  
}  

Streaming Rows with a DataReader

You can also use any object derived from DbDataReader to stream rows of data to a table-valued parameter. The following code fragment demonstrates retrieving data from an Oracle database by using an OracleCommand and an OracleDataReader. The code then configures a SqlCommand to invoke a stored procedure with a single input parameter. The SqlDbType property of the SqlParameter is set to Structured. The AddWithValue passes the OracleDataReader result set to the stored procedure as a table-valued parameter.

C#
// Assumes connection is an open SqlConnection.  
// Retrieve data from Oracle.  
OracleCommand selectCommand = new OracleCommand(  
   "Select CategoryID, CategoryName FROM Categories;",  
   oracleConnection);  
OracleDataReader oracleReader = selectCommand.ExecuteReader(  
   CommandBehavior.CloseConnection);  
  
 // Configure the SqlCommand and table-valued parameter.  
 SqlCommand insertCommand = new SqlCommand(  
   "usp_InsertCategories", connection);  
 insertCommand.CommandType = CommandType.StoredProcedure;  
 SqlParameter tvpParam =  
    insertCommand.Parameters.AddWithValue(  
    "@tvpNewCategories", oracleReader);  
 tvpParam.SqlDbType = SqlDbType.Structured;  
  
 // Execute the command.  
 insertCommand.ExecuteNonQuery();  

See also

SQL Server Features and ADO.NET

The topics in this section discuss features in SQL Server that are targeted at developing database applications using ADO.NET.

For more information, see SQL Server Books Online for the version of SQL Server you are using, as listed in the following table.

SQL Server Books Online

  1. Development (Database Engine)

In This Section

Enumerating Instances of SQL Server (ADO.NET)
Describes how to enumerate active instances of SQL Server.

Provider Statistics for SQL Server
Describes support for obtaining SQL Server run-time statistics.

SQL Server Express User Instances
Describes support for SQL Server Express user instances.

Database Mirroring in SQL Server
Describes database mirroring functionality.

SQL Server Common Language Runtime Integration
Describes how data can be accessed from within a common language runtime (CLR) database object in SQL Server.

Query Notifications in SQL Server
Describes how .NET Framework applications can request notification from SQL Server when data has changed.

Snapshot Isolation in SQL Server
Describes support for snapshot isolation, a row versioning mechanism designed to reduce blocking in transactional applications.

SqlClient Support for High Availability, Disaster Recovery
Describes SqlClient support for high-availability, disaster recovery (AlwaysOn) availability groups.

SqlClient Support for LocalDB
Describes SqlClient support for LocalDB databases.

See also

Enumerating Instances of SQL Server (ADO.NET)

SQL Server permits applications to find SQL Server instances within the current network. The SqlDataSourceEnumerator class exposes this information to the application developer, providing a DataTable containing information about all the visible servers. This returned table contains a list of server instances available on the network, matching the list a user sees when creating a new connection and expanding the drop-down list of available servers in the Connection Properties dialog box. The results displayed are not always complete.

Note

As with most Windows services, it is best to run the SQL Browser service with the least possible privileges. See SQL Server Books Online for more information on the SQL Browser service, and how to manage its behavior.

Retrieving an Enumerator Instance

In order to retrieve the table containing information about the available SQL Server instances, you must first retrieve an enumerator, using the shared/static Instance property:

C#
System.Data.Sql.SqlDataSourceEnumerator instance =   
   System.Data.Sql.SqlDataSourceEnumerator.Instance  

Once you have retrieved the static instance, you can call the GetDataSources method, which returns a DataTable containing information about the available servers:

C#
System.Data.DataTable dataTable = instance.GetDataSources();  

The table returned from the method call contains the following columns, all of which contain string values:

Column Description
ServerName Name of the server.
InstanceName Name of the server instance. Blank if the server is running as the default instance.
IsClustered Indicates whether the server is part of a cluster.
Version Version of the server. For example:

- 9.00.x (SQL Server 2005)
- 10.0.xx (SQL Server 2008)
- 10.50.x (SQL Server 2008 R2)
- 11.0.xx (SQL Server 2012)

Enumeration Limitations

All of the available servers may or may not be listed. The list can vary depending on factors such as timeouts and network traffic, which can cause it to differ on two consecutive calls. Only servers on the same network are listed; broadcast packets typically will not traverse routers, which is why you may not see a server listed, although that behavior is stable across calls.

Listed servers may or may not have additional information such as IsClustered and version. This is dependent on how the list was obtained. Servers listed through the SQL Server browser service will have more details than those found through the Windows infrastructure, which will list only the name.

Note

Server enumeration is only available when running in full-trust. Assemblies running in a partially-trusted environment will not be able to use it, even if they have the SqlClientPermission Code Access Security (CAS) permission.

SQL Server provides information for the SqlDataSourceEnumerator through the use of an external Windows service named SQL Browser. This service is enabled by default, but administrators may turn it off or disable it, making the server instance invisible to this class.

Example

The following console application retrieves information about all of the visible SQL Server instances and displays the information in the console window.

C#
using System.Data.Sql;  
  
class Program  
{  
  static void Main()  
  {  
    // Retrieve the enumerator instance and then the data.  
    SqlDataSourceEnumerator instance =  
      SqlDataSourceEnumerator.Instance;  
    System.Data.DataTable table = instance.GetDataSources();  
  
    // Display the contents of the table.  
    DisplayData(table);  
  
    Console.WriteLine("Press any key to continue.");  
    Console.ReadKey();  
  }  
  
  private static void DisplayData(System.Data.DataTable table)  
  {  
    foreach (System.Data.DataRow row in table.Rows)  
    {  
      foreach (System.Data.DataColumn col in table.Columns)  
      {  
        Console.WriteLine("{0} = {1}", col.ColumnName, row[col]);  
      }  
      Console.WriteLine("============================");  
    }  
  }  
}  

See also

Provider Statistics for SQL Server

Starting with the .NET Framework version 2.0, the .NET Framework Data Provider for SQL Server supports run-time statistics. You enable statistics by setting the StatisticsEnabled property of a valid SqlConnection object to true. After statistics are enabled, you can review them as a "snapshot in time" by retrieving an IDictionary reference via the RetrieveStatistics method of the SqlConnection object and enumerating its name/value pair dictionary entries. These name/value pairs are unordered. At any time, you can call the ResetStatistics method of the SqlConnection object to reset the counters; calling it when statistics gathering has not been enabled does not generate an exception. In addition, if RetrieveStatistics is called without statistics having been enabled first, the values retrieved are the initial values for each entry. If you enable statistics, run your application for a while, and then disable statistics, the values retrieved reflect the values collected up to the point where statistics were disabled. All statistical values gathered are on a per-connection basis.
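
The following fragment is a minimal sketch of that lifecycle: enable statistics, take a snapshot, and reset the counters. The connectionString variable is assumed to be a valid connection string, as in the examples later in this topic.

C#
using (SqlConnection connection = new SqlConnection(connectionString))
{
    // Statistics gathering is off by default; turn it on for this connection.
    connection.StatisticsEnabled = true;
    connection.Open();

    // ... execute commands here ...

    // Take a snapshot of the counters gathered so far.
    System.Collections.IDictionary snapshot = connection.RetrieveStatistics();

    // Reset the counters; statistics gathered afterward start over from zero.
    connection.ResetStatistics();
}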

Statistical Values Available

Currently there are 18 different items available from the Microsoft SQL Server provider. The number of items available can be accessed via the Count property of the IDictionary interface reference returned by RetrieveStatistics. All of the counters for provider statistics use the common language runtime Int64 type (long in C# and Visual Basic), which is 64 bits wide. The maximum value of the Int64 data type, as defined by the Int64.MaxValue field, is (2^63)-1. When the values for the counters reach this maximum value, they should no longer be considered accurate. This means that Int64.MaxValue - 1, that is (2^63)-2, is effectively the greatest valid value for any statistic.

Note

A dictionary is used for returning provider statistics because the number, names and order of the returned statistics may change in the future. Applications should not rely on a specific value being found in the dictionary, but should instead check whether the value is there and branch accordingly.
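
For example, a defensive lookup might look like the following sketch, where currentStatistics is assumed to be the IDictionary returned by RetrieveStatistics.

C#
// Check for the key before using it, because the set of returned
// statistics may change between releases.
if (currentStatistics.Contains("SelectRows"))
{
    long selectRows = (long)currentStatistics["SelectRows"];
    Console.WriteLine("SelectRows: {0}", selectRows);
}
else
{
    Console.WriteLine("SelectRows statistic not available.");
}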

The following table describes the current statistical values available. Note that the key names for the individual values are not localized across regional versions of the Microsoft .NET Framework.

Key Name Description
BuffersReceived Returns the number of tabular data stream (TDS) packets received by the provider from SQL Server after the application has started using the provider and has enabled statistics.
BuffersSent Returns the number of TDS packets sent to SQL Server by the provider after statistics have been enabled. Large commands can require multiple buffers. For example, if a large command is sent to the server and it requires six packets, ServerRoundtrips is incremented by one and BuffersSent is incremented by six.
BytesReceived Returns the number of bytes of data in the TDS packets received by the provider from SQL Server once the application has started using the provider and has enabled statistics.
BytesSent Returns the number of bytes of data sent to SQL Server in TDS packets after the application has started using the provider and has enabled statistics.
ConnectionTime The amount of time (in milliseconds) that the connection has been opened after statistics have been enabled (total connection time if statistics were enabled before opening the connection).
CursorOpens Returns the number of times a cursor was open through the connection once the application has started using the provider and has enabled statistics.

Note that read-only/forward-only results returned by SELECT statements are not considered cursors and thus do not affect this counter.
ExecutionTime Returns the cumulative amount of time (in milliseconds) that the provider has spent processing once statistics have been enabled, including the time spent waiting for replies from the server as well as the time spent executing code in the provider itself.

The classes that include timing code are: SqlConnection, SqlCommand, SqlDataReader, SqlDataAdapter, SqlTransaction, and SqlCommandBuilder.

To keep performance-critical members as small as possible, the following SqlDataReader members are not timed: the this[] operator (all overloads), GetBoolean, GetChar, GetDateTime, GetDecimal, GetDouble, GetFloat, GetGuid, GetInt16, GetInt32, GetInt64, GetName, GetOrdinal, GetSqlBinary, GetSqlBoolean, GetSqlByte, GetSqlDateTime, GetSqlDecimal, GetSqlDouble, GetSqlGuid, GetSqlInt16, GetSqlInt32, GetSqlInt64, GetSqlMoney, GetSqlSingle, GetSqlString, GetString, and IsDBNull.
IduCount Returns the total number of INSERT, DELETE, and UPDATE statements executed through the connection once the application has started using the provider and has enabled statistics.
IduRows Returns the total number of rows affected by INSERT, DELETE, and UPDATE statements executed through the connection once the application has started using the provider and has enabled statistics.
NetworkServerTime Returns the cumulative amount of time (in milliseconds) that the provider spent waiting for replies from the server once the application has started using the provider and has enabled statistics.
PreparedExecs Returns the number of prepared commands executed through the connection once the application has started using the provider and has enabled statistics.
Prepares Returns the number of statements prepared through the connection once the application has started using the provider and has enabled statistics.
SelectCount Returns the number of SELECT statements executed through the connection once the application has started using the provider and has enabled statistics. This includes FETCH statements to retrieve rows from cursors, and the count for SELECT statements is updated when the end of a SqlDataReader is reached.
SelectRows Returns the number of rows selected once the application has started using the provider and has enabled statistics. This counter reflects all the rows generated by SQL statements, even those that were not actually consumed by the caller. For example, closing a data reader before reading the entire result set would not affect the count. This includes the rows retrieved from cursors through FETCH statements.
ServerRoundtrips Returns the number of times the connection sent commands to the server and got a reply back once the application has started using the provider and has enabled statistics.
SumResultSets Returns the number of result sets that have been used once the application has started using the provider and has enabled statistics. For example this would include any result set returned to the client. For cursors, each fetch or block-fetch operation is considered an independent result set.
Transactions Returns the number of user transactions started once the application has started using the provider and has enabled statistics, including rollbacks. If a connection is running with auto commit on, each command is considered a transaction.

This counter increments the transaction count as soon as a BEGIN TRAN statement is executed, regardless of whether the transaction is committed or rolled back later.
UnpreparedExecs Returns the number of unprepared statements executed through the connection once the application has started using the provider and has enabled statistics.

Retrieving a Value

The following console application shows how to enable statistics on a connection, retrieve four individual statistic values, and write them out to the console window.

Note

The following example uses the sample AdventureWorks database included with SQL Server. The connection string provided in the sample code assumes the database is installed and available on the local computer. Modify the connection string as necessary for your environment.

C#
using System;  
using System.Collections;  
using System.Collections.Generic;  
using System.Data;  
using System.Data.SqlClient;  
  
namespace CS_Stats_Console_GetValue  
{  
  class Program  
  {  
    static void Main(string[] args)  
    {  
      string connectionString = GetConnectionString();  
  
      using (SqlConnection awConnection =   
        new SqlConnection(connectionString))  
      {  
        // StatisticsEnabled is False by default.  
        // It must be set to True to start the   
        // statistic collection process.  
        awConnection.StatisticsEnabled = true;  
  
        string productSQL = "SELECT * FROM Production.Product";  
        SqlDataAdapter productAdapter =   
          new SqlDataAdapter(productSQL, awConnection);  
  
        DataSet awDataSet = new DataSet();  
  
        awConnection.Open();  
  
        productAdapter.Fill(awDataSet, "ProductTable");  
        // Retrieve the current statistics as  
        // a collection of values at this point  
        // and time.  
        IDictionary currentStatistics =  
          awConnection.RetrieveStatistics();  
  
        Console.WriteLine("Total Counters: " +  
          currentStatistics.Count.ToString());  
        Console.WriteLine();  
  
        // Retrieve a few individual values  
        // related to the previous command.  
        long bytesReceived =  
            (long) currentStatistics["BytesReceived"];  
        long bytesSent =  
            (long) currentStatistics["BytesSent"];  
        long selectCount =  
            (long) currentStatistics["SelectCount"];  
        long selectRows =  
            (long) currentStatistics["SelectRows"];  
  
        Console.WriteLine("BytesReceived: " +  
            bytesReceived.ToString());  
        Console.WriteLine("BytesSent: " +  
            bytesSent.ToString());  
        Console.WriteLine("SelectCount: " +  
            selectCount.ToString());  
        Console.WriteLine("SelectRows: " +  
            selectRows.ToString());  
  
        Console.WriteLine();  
        Console.WriteLine("Press any key to continue");  
        Console.ReadLine();  
      }  
  
    }  
    private static string GetConnectionString()  
    {  
      // To avoid storing the connection string in your code,  
      // you can retrieve it from a configuration file.  
      return "Data Source=localhost;Integrated Security=SSPI;" +   
        "Initial Catalog=AdventureWorks";  
    }  
  }  
}  

Retrieving All Values

The following console application shows how to enable statistics on a connection, retrieve all available statistic values using the enumerator, and write them to the console window.

Note

The following example uses the sample AdventureWorks database included with SQL Server. The connection string provided in the sample code assumes the database is installed and available on the local computer. Modify the connection string as necessary for your environment.

C#
using System;  
using System.Collections;  
using System.Collections.Generic;  
using System.Text;  
using System.Data;  
using System.Data.SqlClient;  
  
namespace CS_Stats_Console_GetAll  
{  
  class Program  
  {  
    static void Main(string[] args)  
    {  
      string connectionString = GetConnectionString();  
  
      using (SqlConnection awConnection =   
        new SqlConnection(connectionString))  
      {  
        // StatisticsEnabled is False by default.  
        // It must be set to True to start the   
        // statistic collection process.  
        awConnection.StatisticsEnabled = true;  
  
        string productSQL = "SELECT * FROM Production.Product";  
        SqlDataAdapter productAdapter =  
            new SqlDataAdapter(productSQL, awConnection);  
  
        DataSet awDataSet = new DataSet();  
  
        awConnection.Open();  
  
        productAdapter.Fill(awDataSet, "ProductTable");  
  
        // Retrieve the current statistics as  
        // a collection of values at this point  
        // and time.  
        IDictionary currentStatistics =  
            awConnection.RetrieveStatistics();  
  
        Console.WriteLine("Total Counters: " +  
            currentStatistics.Count.ToString());  
        Console.WriteLine();  
  
        Console.WriteLine("Key Name and Value");  
  
        // Note the entries are unsorted.  
        foreach (DictionaryEntry entry in currentStatistics)  
        {  
          Console.WriteLine(entry.Key.ToString() +  
              ": " + entry.Value.ToString());  
        }  
  
        Console.WriteLine();  
        Console.WriteLine("Press any key to continue");  
        Console.ReadLine();  
      }  
  
    }  
    private static string GetConnectionString()  
    {  
      // To avoid storing the connection string in your code,  
      // you can retrieve it from a configuration file.  
      return "Data Source=localhost;Integrated Security=SSPI;" +   
        "Initial Catalog=AdventureWorks";  
    }  
  }  
}  

See also

SQL Server Express User Instances

Microsoft SQL Server Express Edition (SQL Server Express) supports the user instance feature, which is only available when using the .NET Framework Data Provider for SQL Server (SqlClient). A user instance is a separate instance of the SQL Server Express Database Engine that is generated by a parent instance. User instances allow users who are not administrators on their local computers to attach and connect to SQL Server Express databases. Each instance runs under the security context of the individual user, on a one-instance-per-user basis.

User Instance Capabilities

User instances are useful for users who are running Windows under a least-privilege user account (LUA) because each user has SQL Server system administrator (sysadmin) privileges over the instance running on her computer without needing to run as a Windows administrator as well. Software executing on a user instance with limited permissions cannot make system-wide changes because the instance of SQL Server Express is running under the non-administrator Windows account of the user, not as a service. Each user instance is isolated from its parent instance and from any other user instances running on the same computer. Databases running on a user instance are opened in single-user mode only, and it is not possible for multiple users to connect to databases running on a user instance. Replication and distributed queries are also disabled for user instances.

For more information, see "User Instances" in SQL Server Books Online.

Note

User instances are not needed for users who are already administrators on their own computers, or for scenarios involving multiple database users.

Enabling User Instances

To generate user instances, a parent instance of SQL Server Express must be running. User instances are enabled by default when SQL Server Express is installed, and they can be explicitly enabled or disabled by a system administrator executing the sp_configure system stored procedure on the parent instance.

-- Enable user instances.  
sp_configure 'user instances enabled','1'   
  
-- Disable user instances.  
sp_configure 'user instances enabled','0'  

The network protocol for user instances must be local Named Pipes. A user instance cannot be started on a remote instance of SQL Server, and SQL Server logins are not allowed.

Connecting to a User Instance

The User Instance and AttachDBFilename connection string keywords allow a SqlConnection to connect to a user instance. User instances are also supported by the SqlConnectionStringBuilder UserInstance and AttachDBFilename properties.

Note the following about the sample connection string shown below:

  • The Data Source keyword refers to the parent instance of SQL Server Express that is generating the user instance. The default instance is .\sqlexpress.

  • Integrated Security is set to true. To connect to a user instance, Windows Authentication is required; SQL Server logins are not supported.

  • The User Instance keyword is set to true, which invokes a user instance. (The default is false.)

  • The AttachDbFileName connection string keyword is used to attach the primary database file (.mdf), which must include the full path name. AttachDbFileName also corresponds to the "extended properties" and "initial file name" keys within a SqlConnection connection string.

  • The |DataDirectory| substitution string enclosed in the pipe symbols refers to the data directory of the application opening the connection and provides a relative path indicating the location of the .mdf and .ldf database and log files. If you want to locate these files elsewhere, you must provide the full path to the files.

Data Source=.\\SQLExpress;Integrated Security=true;  
User Instance=true;AttachDBFilename=|DataDirectory|\InstanceDB.mdf;  
Initial Catalog=InstanceDB;  

Note

You can also use the SqlConnectionStringBuilder UserInstance and AttachDBFilename properties to build a connection string at run time.
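
The following fragment is a minimal sketch of that approach; the instance name, database file name, and initial catalog are placeholders that you would replace with your own values.

C#
// Build a user instance connection string at run time.
using System.Data.SqlClient;

class UserInstanceConnectionStringExample
{
    public static string Build()
    {
        SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder();
        builder.DataSource = @".\SQLExpress";        // parent SQL Server Express instance
        builder.IntegratedSecurity = true;           // Windows Authentication is required
        builder.UserInstance = true;                 // generate a user instance
        builder.AttachDBFilename = @"|DataDirectory|\InstanceDB.mdf";
        builder.InitialCatalog = "InstanceDB";
        return builder.ConnectionString;
    }
}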

Using the |DataDirectory| Substitution String

AttachDbFileName was extended in ADO.NET 2.0 with the introduction of the |DataDirectory| (enclosed in pipe symbols) substitution string. DataDirectory is used in conjunction with AttachDbFileName to indicate a relative path to a data file, allowing developers to create connection strings that are based on a relative path to the data source instead of being required to specify a full path.

The physical location that DataDirectory points to depends on the type of application. In this example, the Northwind.mdf file to be attached is located in the application's \app_data folder.

Data Source=.\\SQLExpress;Integrated Security=true;  
User Instance=true;  
AttachDBFilename=|DataDirectory|\app_data\Northwind.mdf;  
Initial Catalog=Northwind;  

When DataDirectory is used, the resulting file path cannot be higher in the directory structure than the directory pointed to by the substitution string. For example, if the fully expanded DataDirectory is C:\AppDirectory\app_data, then the sample connection string shown above works because it is below C:\AppDirectory. However, attempting to specify AttachDbFileName as |DataDirectory|\..\data will result in an error because \data is not a subdirectory of \AppDirectory.

If the connection string has an improperly formatted substitution string, an ArgumentException will be thrown.

Note

System.Data.SqlClient resolves the substitution strings into full paths against the local computer file system. Therefore, remote server, HTTP, and UNC path names are not supported. An exception is thrown when the connection is opened if the server is not located on the local computer.

When the SqlConnection is opened, it is redirected from the default SQL Server Express instance to a run-time initiated instance running under the caller's account.

Note

It may be necessary to increase the ConnectionTimeout value since user instances may take longer to load than regular instances.

The following code fragment opens a new SqlConnection, displays the connection string in the console window, and then closes the connection when exiting the using code block.

C#
private static void OpenSqlConnection()  
{  
    // Retrieve the connection string.  
    string connectionString = GetConnectionString();  
  
    using (SqlConnection connection =   
        new SqlConnection(connectionString))  
    {  
        connection.Open();  
        Console.WriteLine("ConnectionString: {0}",   
             connection.ConnectionString);  
    }  
}  
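
GetConnectionString is not shown in the fragment above. A minimal sketch, assuming the user instance connection string described earlier (substitute your own database file name), might look like this:

C#
private static string GetConnectionString()
{
    // To avoid storing the connection string in your code,
    // you can retrieve it from a configuration file.
    return "Data Source=.\\SQLExpress;Integrated Security=true;" +
        "User Instance=true;" +
        "AttachDBFilename=|DataDirectory|\\InstanceDB.mdf;" +
        "Initial Catalog=InstanceDB;";
}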

Note

User instances are not supported in common language runtime (CLR) code that is running inside of SQL Server. An InvalidOperationException is thrown if Open is called on a SqlConnection that has User Instance=true in the connection string.

Lifetime of a User Instance Connection

Unlike versions of SQL Server that run as a service, SQL Server Express instances do not need to be manually started and stopped. Each time a user logs in and connects to a user instance, the user instance is started if it is not already running. User instance databases have the AutoClose option set so that the database is automatically shut down after a period of inactivity. The sqlservr.exe process that is started is kept running for a limited time-out period after the last connection to the instance is closed, so it does not need to be restarted if another connection is opened before the time-out has expired. The user instance automatically shuts down if no new connection opens before that time-out period has expired. A system administrator on the parent instance can set the duration of the time-out period for a user instance by using sp_configure to change the user instance timeout option. The default is 60 minutes.

Note

If Min Pool Size is used in the connection string with a value greater than zero, the connection pooler will always maintain a few opened connections, and the user instance will not automatically shut down.

How User Instances Work

The first time a user instance is generated for each user, the master and msdb system databases are copied from the Template Data folder to a path under the user's local application data repository directory for exclusive use by the user instance. This path is typically C:\Documents and Settings\<UserName>\Local Settings\Application Data\Microsoft\Microsoft SQL Server Data\SQLEXPRESS. When a user instance starts up, the tempdb, log, and trace files are also written to this directory. A name is generated for the instance, which is guaranteed to be unique for each user.

By default all members of the Windows Builtin\Users group are granted permissions to connect on the local instance as well as read and execute permissions on the SQL Server binaries. Once the credentials of the calling user hosting the user instance have been verified, that user becomes the sysadmin on that instance. Only shared memory is enabled for user instances, which means that only operations on the local machine are possible.

Users must be granted both read and write permissions on the .mdf and .ldf files specified in the connection string.

Note

The .mdf and .ldf files represent the database and log files, respectively. These two files are a matched set, so care must be taken during backup and restore operations. The database file contains information about the exact version of the log file, and the database will not open if it is coupled with the wrong log file.

To avoid data corruption, a database in the user instance is opened with exclusive access. If two different user instances share the same database on the same computer, the user on the first instance must close the database before it can be opened in a second instance.

User Instance Scenarios

User instances provide developers of database applications with a SQL Server data store that does not depend on developers having administrative accounts on their development computers. User instances are based on the Access/Jet model, where the database application simply connects to a file, and the user automatically has full permissions on all of the database objects without needing the intervention of a system administrator to grant permissions. It is intended to work in situations where the user is running under a least-privilege user account (LUA) and does not have administrative privileges on the server or local machine, yet needs to create database objects and applications. User instances allow users to create instances at run time that run under the user's own security context, and not in the security context of a more privileged system service.

Important

User instances should only be used in scenarios where all the applications using it are fully trusted.

User instance scenarios include:

  • Any single-user application where sharing data is not required.

  • ClickOnce deployment. If the .NET Framework 2.0 (or later) and SQL Server Express are already installed on the target computer, the installation package downloaded as a result of a ClickOnce action can be installed and used by non-administrator users. Note that an administrator must install SQL Server Express if that is part of the setup. For more information, see ClickOnce Deployment for Windows Forms.

  • Dedicated ASP.NET hosting using Windows Authentication. A single SQL Server Express instance can be hosted on an intranet. The application connects using the ASPNET Windows account, not by using impersonation. User instances should not be used for third-party or shared hosting scenarios where all applications would share the same user instance and would no longer remain isolated from each other.

See also

Database Mirroring in SQL Server

Database mirroring in SQL Server allows you to keep a copy, or mirror, of a SQL Server database on a standby server. Mirroring ensures that two separate copies of the data exist at all times, providing high availability and complete data redundancy. The .NET Data Provider for SQL Server provides implicit support for database mirroring, so that the developer does not need to take any action or write any code once it has been configured for a SQL Server database. In addition, the SqlConnection object supports an explicit connection mode that allows supplying the name of a failover partner server in the ConnectionString.

The following simplified sequence of events occurs for a SqlConnection object that targets a database configured for mirroring:

  1. The client application successfully connects to the principal database, and the server sends back the name of the partner server, which is then cached on the client.

  2. If the server containing the principal database fails or connectivity is interrupted, connection and transaction state is lost. The client application attempts to re-establish a connection to the principal database and fails.

  3. The client application then transparently attempts to establish a connection to the mirror database on the partner server. If it succeeds, the connection is redirected to the mirror database, which then becomes the new principal database.

Specifying the Failover Partner in the Connection String

If you supply the name of a failover partner server in the connection string, the client will transparently attempt a connection with the failover partner if the principal database is unavailable when the client application first connects.

";Failover Partner=PartnerServerName"  

If you omit the name of the failover partner server and the principal database is unavailable when the client application first connects, a SqlException is raised.

When a SqlConnection is successfully opened, the failover partner name is returned by the server and supersedes any values supplied in the connection string.
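
The following fragment is a minimal sketch of such a connection; the server, partner, and database names are placeholders, and the Initial Catalog is specified explicitly as required for mirroring scenarios.

C#
string connectionString =
    "Data Source=PrincipalServerName;Failover Partner=PartnerServerName;" +
    "Initial Catalog=AdventureWorks;Integrated Security=true;";

using (SqlConnection connection = new SqlConnection(connectionString))
{
    // If the principal is unavailable on the first connect, SqlClient
    // transparently attempts the failover partner named above.
    connection.Open();
    Console.WriteLine("Connected to: {0}", connection.DataSource);
}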

Note

You must explicitly specify the initial catalog or database name in the connection string for database mirroring scenarios. If the client receives failover information on a connection that doesn't have an explicitly specified initial catalog or database, the failover information is not cached and the application does not attempt to fail over if the principal server fails. If a connection string has a value for the failover partner, but no value for the initial catalog or database, an InvalidArgumentException is raised.

Retrieving the Current Server Name

In the event of a failover, you can retrieve the name of the server to which the current connection is actually connected by using the DataSource property of a SqlConnection object. The following code fragment retrieves the name of the active server, assuming that the connection variable references an open SqlConnection.

When a failover event occurs and the connection is switched to the mirror server, the DataSource property is updated to reflect the mirror name.

C#
string activeServer = connection.DataSource;  

SqlClient Mirroring Behavior

The client always tries to connect to the current principal server. If it fails, it tries the failover partner. If the mirror database has already been switched to the principal role on the partner server, the connection succeeds and the new principal-mirror mapping is sent to the client and cached for the lifetime of the calling AppDomain. It is not stored in persistent storage and is not available for subsequent connections in a different AppDomain or process. However, it is available for subsequent connections within the same AppDomain. Note that another AppDomain or process running on the same or a different computer always has its own pool of connections, and those connections are not reset. In that case, if the principal database goes down, each process or AppDomain fails once, and the pool is automatically cleared.

Note

Mirroring support on the server is configured on a per-database basis. If data manipulation operations are executed against other databases not included in the principal/mirror set, either by using multipart names or by changing the current database, the changes to these other databases do not propagate in the event of failure. No error is generated when data is modified in a database that is not mirrored. The developer must evaluate the possible impact of such operations.

Database Mirroring Resources

For conceptual documentation and information on configuring, deploying and administering mirroring, see the following resources in SQL Server documentation.

Database Mirroring
Describes how to set up and configure mirroring in SQL Server.

See also

SQL Server Common Language Runtime Integration

SQL Server 2005 introduced the integration of the common language runtime (CLR) component of the .NET Framework for Microsoft Windows. This means that you can write stored procedures, triggers, user-defined types, user-defined functions, user-defined aggregates, and streaming table-valued functions, using any .NET Framework language, including Microsoft Visual Basic .NET and Microsoft Visual C#. The Microsoft.SqlServer.Server namespace contains a set of new application programming interfaces (APIs) so that managed code can interact with the Microsoft SQL Server environment.

This section describes features and behaviors that are specific to SQL Server common language runtime (CLR) integration and the SQL Server in-process specific extensions to ADO.NET.

This section is meant to provide only enough information to get started programming with SQL Server CLR integration, and is not meant to be comprehensive. For more detailed information, see the version of SQL Server Books Online for the version of SQL Server you are using.

SQL Server Books Online

  1. Common Language Runtime (CLR) Integration Programming Concepts

In This Section

Introduction to SQL Server CLR Integration
Provides an introduction to SQL Server CLR integration. Provides links to additional topics.

CLR User-Defined Functions
Describes how to implement and use the various types of CLR functions: table-valued, scalar, and user-defined aggregate functions.

CLR User-Defined Types
Describes how to implement and use CLR user-defined types. Provides links to additional topics.

CLR Stored Procedures
Describes how to implement and use CLR stored procedures. Provides links to additional topics.

CLR Triggers
Describes how to implement and use CLR triggers. Provides links to additional topics.

The Context Connection
Describes the context connection.

SQL Server In-Process-Specific Behavior of ADO.NET
Describes the SQL Server in-process specific extensions to ADO.NET, and the context connection. Provides links to additional topics.

See also

Introduction to SQL Server CLR Integration

The common language runtime (CLR) is the heart of the Microsoft .NET Framework and provides the execution environment for all .NET Framework code. Code that runs within the CLR is referred to as managed code. The CLR provides various functions and services required for program execution, including just-in-time (JIT) compilation, allocating and managing memory, enforcing type safety, exception handling, thread management, and security.

With the CLR hosted in Microsoft SQL Server (called CLR integration), you can author stored procedures, triggers, user-defined functions, user-defined types, and user-defined aggregates in managed code. Because managed code compiles to native code prior to execution, you can achieve significant performance increases in some scenarios.

Managed code uses Code Access Security (CAS), code links, and application domains to prevent assemblies from performing certain operations. SQL Server uses CAS to help secure the managed code and prevent compromise of the operating system or database server.

This section is meant to provide only enough information to get started programming with SQL Server CLR integration, and is not meant to be comprehensive. For more detailed information, see the version of SQL Server Books Online for the version of SQL Server you are using.

SQL Server Books Online

Enabling CLR Integration

The common language runtime (CLR) integration feature is off by default in Microsoft SQL Server, and must be enabled in order to use objects that are implemented using CLR integration. To enable CLR integration using Transact-SQL, use the clr enabled option of the sp_configure stored procedure as shown:

sp_configure 'clr enabled', 1  
GO  
RECONFIGURE  
GO  

You can disable CLR integration by setting the clr enabled option to 0. When you disable CLR integration, SQL Server stops executing all CLR routines and unloads all application domains.

For more detailed information, see the version of SQL Server Books Online for the version of SQL Server you are using.

SQL Server Books Online

Deploying a CLR Assembly

Once the CLR methods have been tested and verified on the test server, they can be distributed to production servers using a deployment script. The deployment script can be generated manually, or by using SQL Server Management Studio. For more detailed information, see the version of SQL Server Books Online for the version of SQL Server you are using.

SQL Server Books Online

  1. Deploying CLR Database Objects

CLR Integration Security

The security model of the Microsoft SQL Server integration with the Microsoft .NET Framework common language runtime (CLR) manages and secures access between different types of CLR and non-CLR objects running within SQL Server. These objects may be called by a Transact-SQL statement or another CLR object running in the server.

For more detailed information, see the version of SQL Server Books Online for the version of SQL Server you are using.

SQL Server Books Online

Debugging a CLR Assembly

Microsoft SQL Server provides support for debugging Transact-SQL and common language runtime (CLR) objects in the database. Debugging works across languages: users can step seamlessly into CLR objects from Transact-SQL, and vice versa.

For more detailed information, see the version of SQL Server Books Online for the version of SQL Server you are using.

SQL Server Books Online

See also

CLR User-Defined Functions

User-defined functions are routines that can take parameters, perform calculations or other actions, and return a result. You can write user-defined functions in any Microsoft .NET Framework programming language, such as Microsoft Visual Basic .NET or Microsoft Visual C#.

For more detailed information, see CLR User-Defined Functions.

See also

CLR User-Defined Types

Microsoft SQL Server provides support for user-defined types (UDTs) implemented with the Microsoft .NET Framework common language runtime (CLR). The CLR is integrated into SQL Server, and this mechanism enables you to extend the type system of the database. UDTs provide user extensibility of the SQL Server data type system, and also the ability to define complex structured types.

UDTs can provide two key benefits from an application architecture perspective:

  • Strong encapsulation (both in the client and the server) between the internal state and the external behaviors.

  • Deep integration with other related server features. Once you define your own UDT, you can use it in all contexts where you can use a system type in SQL Server, including column definitions, and as variables, parameters, function results, cursors, triggers, and replication.

For more detailed information, see the SQL Server documentation for the version of SQL Server you're using.

SQL Server documentation

  1. CLR User-Defined Types

See also

CLR Stored Procedures

Stored procedures are routines that cannot be used in scalar expressions. They can return tabular results and messages to the client, invoke data definition language (DDL) and data manipulation language (DML) statements, and return output parameters.

Note

Microsoft Visual Basic does not support output parameters in the same way that Microsoft Visual C# does. In Visual Basic, you must pass the parameter by reference and apply the <Out()> attribute to represent an output parameter, as in the following:

VB
Public Shared Sub ExecuteToClient( <Out()> ByRef number As Integer)  
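
For comparison, a minimal sketch of the equivalent C# signature, where the out keyword designates the output parameter (the method body shown here is illustrative):

C#
public static void ExecuteToClient(out int number)
{
    // An output parameter must be assigned before the method returns.
    number = 0;
}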

For more detailed information, see the version of SQL Server documentation for the version of SQL Server you're using.

SQL Server documentation

  1. CLR Stored Procedures

See also

CLR Triggers

A trigger is a special type of stored procedure that automatically runs when a language event executes. Because of the Microsoft SQL Server integration with the .NET Framework common language runtime (CLR), you can use any .NET Framework language to create CLR triggers.

For more detailed information, see the SQL Server documentation for the version of SQL Server you're using.

SQL Server documentation

  1. CLR Triggers

See also

The Context Connection

The problem of internal data access is a fairly common scenario. That is, you wish to access the same server on which your common language runtime (CLR) stored procedure or function is executing. One option is to create a connection using SqlConnection, specify a connection string that points to the local server, and open the connection. This requires specifying credentials for logging in. The connection is in a different database session than the stored procedure or function, it may have different SET options, it is in a separate transaction, it does not see your temporary tables, and so on. If your managed stored procedure or function code is executing in the SQL Server process, it is because someone connected to that server and executed a SQL statement to invoke it. You probably want the stored procedure or function to execute in the context of that connection, along with its transaction, SET options, and so on. This is called the context connection.

The context connection lets you execute Transact-SQL statements in the same context that your code was invoked in the first place. For more detailed information, see the version of SQL Server Books Online for the version of SQL Server you are using.
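
The following is a minimal sketch of a CLR stored procedure that opens the context connection; the procedure name and query are illustrative and not part of the original documentation.

C#
using System.Data.SqlClient;
using Microsoft.SqlServer.Server;

public class ContextConnectionSample
{
    [SqlProcedure]
    public static void SendProductCount()
    {
        // "context connection=true" reuses the caller's session, transaction,
        // and SET options instead of opening a new connection to the server.
        using (SqlConnection connection =
            new SqlConnection("context connection=true"))
        {
            connection.Open();
            SqlCommand command = new SqlCommand(
                "SELECT COUNT(*) FROM Production.Product", connection);
            SqlContext.Pipe.ExecuteAndSend(command);
        }
    }
}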

SQL Server Books Online

  1. The Context Connection

See also

SQL Server In-Process-Specific Behavior of ADO.NET

There are four main functional extensions to ADO.NET, found in the Microsoft.SqlServer.Server namespace, that are specifically for in-process use: SqlContext, SqlPipe, SqlTriggerContext, and SqlDataRecord.
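
As an illustration (not taken from the original documentation), the following minimal sketch uses SqlContext, SqlPipe, and SqlDataRecord to send a constructed row from managed code back to the caller.

C#
using System.Data;
using Microsoft.SqlServer.Server;

public class InProcessSample
{
    [SqlProcedure]
    public static void SendStatusRow()
    {
        // Describe a one-column result set and send a single row through the pipe.
        SqlDataRecord record = new SqlDataRecord(
            new SqlMetaData("Status", SqlDbType.NVarChar, 50));
        record.SetString(0, "Running in-process");
        SqlContext.Pipe.Send(record);
    }
}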

For more detailed information, see the version of SQL Server Books Online for the version of SQL Server you are using.

SQL Server Books Online

  1. SQL Server In-Process Specific Extensions to ADO.NET

See also

Query Notifications in SQL Server

Built upon the Service Broker infrastructure, query notifications allow applications to be notified when data has changed. This feature is particularly useful for applications that provide a cache of information from a database, such as a Web application, and need to be notified when the source data is changed.

There are three ways you can implement query notifications using ADO.NET:

  1. The low-level implementation is provided by the SqlNotificationRequest class that exposes server-side functionality, enabling you to execute a command with a notification request.

  2. The high-level implementation is provided by the SqlDependency class, which provides an abstraction of notification functionality between the source application and SQL Server, enabling you to use a dependency to detect changes on the server. In most cases, this is the simplest and most effective way for managed client applications that use the .NET Framework Data Provider for SQL Server to take advantage of the SQL Server notification capability.

  3. In addition, Web applications built using ASP.NET 2.0 or later can use the SqlCacheDependency helper classes.

Query notifications are used for applications that need to refresh displays or caches in response to changes in underlying data. Microsoft SQL Server allows .NET Framework applications to send a command to SQL Server and request notification if executing the same command would produce result sets different from those initially retrieved. Notifications generated at the server are sent through queues to be processed later.

You can set up notifications for SELECT and EXECUTE statements. When using an EXECUTE statement, SQL Server registers a notification for the command executed rather than the EXECUTE statement itself. The command must meet the requirements and limitations for a SELECT statement. When a command that registers a notification contains more than one statement, the Database Engine creates a notification for each statement in the batch.

If you are developing an application where you need reliable sub-second notifications when data changes, review the sections Planning an Efficient Query Notifications Strategy and Alternatives to Query Notifications in the Planning for Notifications topic in SQL Server Books Online. For more information about Query Notifications and SQL Server Service Broker, see the following links to topics in SQL Server Books Online.

SQL Server documentation

In This Section

Enabling Query Notifications
Discusses how to use query notifications, including the requirements for enabling and using them.

SqlDependency in an ASP.NET Application
Demonstrates how to use query notifications from an ASP.NET application.

Detecting Changes with SqlDependency
Demonstrates how to detect when query results will be different from those originally received.

SqlCommand Execution with a SqlNotificationRequest
Demonstrates configuring a SqlCommand object to work with a query notification.

Reference

SqlNotificationRequest
Describes the SqlNotificationRequest class and all of its members.

SqlDependency
Describes the SqlDependency class and all of its members.

SqlCacheDependency
Describes the SqlCacheDependency class and all of its members.

See also

Enabling Query Notifications

Applications that consume query notifications have a common set of requirements. Your data source must be correctly configured to support SQL query notifications, and the user must have the correct client-side and server-side permissions.

To use query notifications you must:

  • Enable query notifications for your database.

  • Ensure that the user ID used to connect to the database has the necessary permissions.

  • Use a SqlCommand object to execute a valid SELECT statement with an associated notification object—either SqlDependency or SqlNotificationRequest.

  • Provide code to process the notification if the data being monitored changes.

Query Notifications Requirements

Query notifications are supported only for SELECT statements that meet a list of specific requirements. For details, see the Service Broker and Query Notifications documentation in SQL Server Books Online.

SQL Server documentation

Enabling Query Notifications to Run Sample Code

To enable Service Broker on the AdventureWorks database by using SQL Server Management Studio, execute the following Transact-SQL statement:

ALTER DATABASE AdventureWorks SET ENABLE_BROKER;

For the query notification samples to run correctly, the following Transact-SQL statements must be executed on the database server.

CREATE QUEUE ContactChangeMessages;  
  
CREATE SERVICE ContactChangeNotifications  
  ON QUEUE ContactChangeMessages  
([http://schemas.microsoft.com/SQL/Notifications/PostQueryNotification]);  

Query Notifications Permissions

Users who execute commands requesting notification must have SUBSCRIBE QUERY NOTIFICATIONS database permission on the server.

Client-side code that runs in a partial trust situation requires the SqlClientPermission.

The following code creates a SqlClientPermission object, setting the PermissionState to Unrestricted. The Demand will force a SecurityException at run time if all callers higher in the call stack have not been granted the permission.

C#
// Code requires directives to
// System.Security.Permissions and
// System.Data.SqlClient

private bool CanRequestNotifications()
{
    SqlClientPermission permission =
        new SqlClientPermission(
        PermissionState.Unrestricted);
    try
    {
        permission.Demand();
        return true;
    }
    catch (System.Exception)
    {
        return false;
    }
}

Choosing a Notification Object

The query notifications API provides two objects to process notifications: SqlDependency and SqlNotificationRequest. In general, most non-ASP.NET applications should use the SqlDependency object. ASP.NET applications should use the higher-level SqlCacheDependency, which wraps SqlDependency and provides a framework for administering the notification and cache objects.

Using SqlDependency

To use SqlDependency, Service Broker must be enabled for the SQL Server database being used, and users must have permissions to receive notifications. Service Broker objects, such as the notification queue, are predefined.

In addition, SqlDependency automatically launches a worker thread to process notifications as they are posted to the queue; it also parses the Service Broker message, exposing the information as event argument data. SqlDependency must be initialized by calling the Start method to establish a dependency to the database. This is a static method that needs to be called only once during application initialization for each database connection required. The Stop method should be called at application termination for each dependency connection that was made.
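
A minimal sketch of the required initialization and cleanup calls, assuming connectionString is the same string used by the application's SqlConnection objects:

C#
// Call once during application startup.
SqlDependency.Start(connectionString);

// ... create SqlDependency objects and execute commands as needed ...

// Call once during application shutdown.
SqlDependency.Stop(connectionString);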

Using SqlNotificationRequest

In contrast, SqlNotificationRequest requires you to implement the entire listening infrastructure yourself. In addition, all the supporting Service Broker objects such as the queue, service, and message types supported by the queue must be defined. This manual approach is useful if your application requires special notification messages or notification behaviors, or if your application is part of a larger Service Broker application.

See also

SqlDependency in an ASP.NET Application

The example in this section shows how to use SqlDependency indirectly by leveraging the ASP.NET SqlCacheDependency object. The SqlCacheDependency object uses a SqlDependency to listen for notifications and correctly update the cache.

Note

The sample code assumes that you have enabled query notifications by executing the scripts in Enabling Query Notifications.

About the Sample Application

The sample application uses a single ASP.NET Web page to display product information from the AdventureWorks SQL Server database in a GridView control. When the page loads, the code writes the current time to a Label control. It then defines a SqlCacheDependency object and sets properties on the Cache object to store the cache data for up to three minutes. The code then connects to the database and retrieves the data. When the page is loaded and the application is running, ASP.NET retrieves data from the cache, which you can verify by noting that the time on the page does not change. If the data being monitored changes, ASP.NET invalidates the cache and repopulates the GridView control with fresh data, updating the time displayed in the Label control.

Creating the Sample Application

Follow these steps to create and run the sample application:

  1. Create a new ASP.NET Web site.

  2. Add a Label and a GridView control to the Default.aspx page.

  3. Open the page's class module and add the following directives:

    C#
    using System.Data.SqlClient;  
    using System.Web.Caching;  
    
  4. Add the following code in the page's Page_Load event:

    C#
    protected void Page_Load(object sender, EventArgs e)
    {
        Label1.Text = "Cache Refresh: " +
        DateTime.Now.ToLongTimeString();
    
        // Create a dependency connection to the database.
        SqlDependency.Start(GetConnectionString());
    
        using (SqlConnection connection =
            new SqlConnection(GetConnectionString()))
        {
            using (SqlCommand command =
                new SqlCommand(GetSQL(), connection))
            {
                SqlCacheDependency dependency =
                    new SqlCacheDependency(command);
                // Refresh the cache after the number of minutes
                // listed below if a change does not occur.
                // This value could be stored in a configuration file.
                int numberOfMinutes = 3;
                DateTime expires =
                    DateTime.Now.AddMinutes(numberOfMinutes);
    
                Response.Cache.SetExpires(expires);
                Response.Cache.SetCacheability(HttpCacheability.Public);
                Response.Cache.SetValidUntilExpires(true);
    
                Response.AddCacheDependency(dependency);
    
                connection.Open();
    
                GridView1.DataSource = command.ExecuteReader();
                GridView1.DataBind();
            }
        }
    }
    
  5. Add two helper methods, GetConnectionString and GetSQL. The connection string defined uses integrated security. You will need to verify that the account you are using has the necessary database permissions and that the sample database, AdventureWorks, has notifications enabled.

    C#
    private string GetConnectionString()
    {
        // To avoid storing the connection string in your code,
        // you can retrieve it from a configuration file.
        return "Data Source=(local);Integrated Security=true;" +
          "Initial Catalog=AdventureWorks;";
    }
    private string GetSQL()
    {
        return "SELECT Production.Product.ProductID, " +
        "Production.Product.Name, " +
        "Production.Location.Name AS Location, " +
        "Production.ProductInventory.Quantity " +
        "FROM Production.Product INNER JOIN " +
        "Production.ProductInventory " +
        "ON Production.Product.ProductID = " +
        "Production.ProductInventory.ProductID " +
        "INNER JOIN Production.Location " +
        "ON Production.ProductInventory.LocationID = " +
        "Production.Location.LocationID " +
        "WHERE ( Production.ProductInventory.Quantity <= 100 ) " +
        "ORDER BY Production.ProductInventory.Quantity, " +
        "Production.Product.Name;";
    }
    

Testing the Application

The application caches the data displayed on the Web form and refreshes it every three minutes if there is no activity. If a change occurs to the database, the cache is refreshed immediately. Run the application from Visual Studio, which loads the page into the browser. The cache refresh time displayed indicates when the cache was last refreshed. Wait three minutes, and then refresh the page, causing a postback event to occur. Note that the time displayed on the page has changed. If you refresh the page in less than three minutes, the time displayed on the page will remain the same.

Now update the data in the database, using a Transact-SQL UPDATE command and refresh the page. The time displayed now indicates that the cache was refreshed with the new data from the database. Note that although the cache is updated, the time displayed on the page does not change until a postback event occurs.

See also

Detecting Changes with SqlDependency

A SqlDependency object can be associated with a SqlCommand in order to detect when query results differ from those originally retrieved. You can also assign a delegate to the OnChange event, which will fire when the results change for an associated command. You must associate the SqlDependency with the command before you execute the command. The HasChanges property of the SqlDependency can also be used to determine if the query results have changed since the data was first retrieved.

Security Considerations

The dependency infrastructure relies on a SqlConnection that is opened when Start is called in order to receive notifications that the underlying data has changed for a given command. The ability for a client to initiate the call to SqlDependency.Start is controlled through the use of SqlClientPermission and code access security attributes. For more information, see Enabling Query Notifications and Code Access Security and ADO.NET.

Example

The following steps illustrate how to declare a dependency, execute a command, and receive a notification when the result set changes:

  1. Initiate a SqlDependency connection to the server.

  2. Create SqlConnection and SqlCommand objects to connect to the server and define a Transact-SQL statement.

  3. Create a new SqlDependency object, or use an existing one, and bind it to the SqlCommand object. Internally, this creates a SqlNotificationRequest object and binds it to the command object as needed. This notification request contains an internal identifier that uniquely identifies this SqlDependency object. It also starts the client listener if it is not already active.

  4. Subscribe an event handler to the OnChange event of the SqlDependency object.

  5. Execute the command using any of the Execute methods of the SqlCommand object. Because the command is bound to the notification object, the server recognizes that it must generate a notification, and the queue information will point to the dependencies queue.

  6. Stop the SqlDependency connection to the server.

If any user subsequently changes the underlying data, Microsoft SQL Server detects that there is a notification pending for such a change, and posts a notification that is processed and forwarded to the client through the underlying SqlConnection that was created by calling SqlDependency.Start. The client listener receives the invalidation message. The client listener then locates the associated SqlDependency object and fires the OnChange event.

The following code fragment shows the design pattern you would use to create a sample application.

C#
void Initialization()
{
    // Create a dependency connection.
    SqlDependency.Start(connectionString, queueName);
}

void SomeMethod()
{
    // Assume connection is an open SqlConnection.

    // Create a new SqlCommand object.
    using (SqlCommand command = new SqlCommand(
        "SELECT ShipperID, CompanyName, Phone FROM dbo.Shippers",
        connection))
    {
        // Create a dependency and associate it with the SqlCommand.
        SqlDependency dependency = new SqlDependency(command);
        // Maintain the reference in a class member.

        // Subscribe to the SqlDependency event.
        dependency.OnChange += new OnChangeEventHandler(OnDependencyChange);

        // Execute the command.
        using (SqlDataReader reader = command.ExecuteReader())
        {
            // Process the DataReader.
        }
    }
}

// Handler method
void OnDependencyChange(object sender,
   SqlNotificationEventArgs e )
{
  // Handle the event (for example, invalidate this cache entry).
}

void Termination()
{
    // Release the dependency.
    SqlDependency.Stop(connectionString, queueName);
}

See also

SqlCommand Execution with a SqlNotificationRequest

A SqlCommand can be configured to generate a notification when data changes after it has been fetched from the server and the result set would be different if the query were executed again. This is useful for scenarios where you want to use custom notification queues on the server or when you do not want to maintain live objects.

Creating the Notification Request

You can use a SqlNotificationRequest object to create the notification request by binding it to a SqlCommand object. Once the request is created, you no longer need the SqlNotificationRequest object. You can query the queue for any notifications and respond appropriately. Notifications can occur even if the application is shut down and subsequently restarted.

When the command with the associated notification is executed, any changes to the original result set trigger sending a message to the SQL Server queue that was configured in the notification request.

How you poll the SQL Server queue and interpret the message is specific to your application. The application is responsible for polling the queue and reacting based on the contents of the message.
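
As one possible approach (not part of the original sample), the queue can be polled with a Transact-SQL RECEIVE statement issued through a SqlCommand. The queue name mySSBQueue is hypothetical, and connection is assumed to be an open SqlConnection.

C#
using (SqlCommand pollCommand = new SqlCommand(
    "WAITFOR (RECEIVE TOP (1) CAST(message_body AS XML) FROM mySSBQueue), " +
    "TIMEOUT 5000;", connection))
{
    object message = pollCommand.ExecuteScalar();
    if (message != null)
    {
        // A notification message arrived: re-run the original query and,
        // if needed, register a new notification request before executing it.
    }
}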

Note

When using SQL Server notification requests with SqlDependency, create your own queue name instead of using the default service name.

There are no new client-side security elements for SqlNotificationRequest. This is primarily a server feature, and the server defines special privileges that users must have in order to request a notification.

Example

The following code fragment demonstrates how to create a SqlNotificationRequest and associate it with a SqlCommand.

C#
// Assume connection is an open SqlConnection.
// Create a new SqlCommand object.
SqlCommand command = new SqlCommand(
    "SELECT ShipperID, CompanyName, Phone FROM dbo.Shippers", connection);

// Create a SqlNotificationRequest object.
SqlNotificationRequest notificationRequest = new SqlNotificationRequest();
// UserData carries an application-defined identifier; Options names the
// Service Broker service in the form "service=<service name>".
notificationRequest.UserData = "NotificationID";
notificationRequest.Options = "service=mySSBQueue";

// Associate the notification request with the command.
command.Notification = notificationRequest;
// Execute the command.
command.ExecuteReader();
// Process the DataReader.
// You can use Transact-SQL syntax to periodically poll the
// SQL Server queue to see if you have a new message.

See also

Snapshot Isolation in SQL Server

Snapshot isolation enhances concurrency for OLTP applications.

Understanding Snapshot Isolation and Row Versioning

Once snapshot isolation is enabled, updated row versions for each transaction are maintained in tempdb. A unique transaction sequence number identifies each transaction, and these unique numbers are recorded for each row version. The transaction works with the most recent row versions having a sequence number before the sequence number of the transaction. Newer row versions created after the transaction has begun are ignored by the transaction.

The term "snapshot" reflects the fact that all queries in the transaction see the same version, or snapshot, of the database, based on the state of the database at the moment in time when the transaction begins. No locks are acquired on the underlying data rows or data pages in a snapshot transaction, which permits other transactions to execute without being blocked by a prior uncompleted transaction. Transactions that modify data do not block transactions that read data, and transactions that read data do not block transactions that write data, as they normally would under the default READ COMMITTED isolation level in SQL Server. This non-blocking behavior also significantly reduces the likelihood of deadlocks for complex transactions.

Snapshot isolation uses an optimistic concurrency model. If a snapshot transaction attempts to commit modifications to data that has changed since the transaction began, the transaction will roll back and an error will be raised. You can avoid this by using UPDLOCK hints for SELECT statements that access data to be modified. See "Locking Hints" in SQL Server Books Online for more information.

Snapshot isolation must be enabled by setting the ALLOW_SNAPSHOT_ISOLATION ON database option before it is used in transactions. This activates the mechanism for storing row versions in the temporary database (tempdb). You must enable snapshot isolation in each database that uses it with the Transact-SQL ALTER DATABASE statement. In this respect, snapshot isolation differs from the traditional isolation levels of READ COMMITTED, REPEATABLE READ, SERIALIZABLE, and READ UNCOMMITTED, which require no configuration. The following statements activate snapshot isolation and replace the default READ COMMITTED behavior with SNAPSHOT:

SQL
ALTER DATABASE MyDatabase  
SET ALLOW_SNAPSHOT_ISOLATION ON  
  
ALTER DATABASE MyDatabase  
SET READ_COMMITTED_SNAPSHOT ON  

Setting the READ_COMMITTED_SNAPSHOT ON option allows access to versioned rows under the default READ COMMITTED isolation level. If the READ_COMMITTED_SNAPSHOT option is set to OFF, you must explicitly set the Snapshot isolation level for each session in order to access versioned rows.

Managing Concurrency with Isolation Levels

The isolation level under which a Transact-SQL statement executes determines its locking and row versioning behavior. An isolation level has connection-wide scope, and once set for a connection with the SET TRANSACTION ISOLATION LEVEL statement, it remains in effect until the connection is closed or another isolation level is set. When a connection is closed and returned to the pool, the isolation level from the last SET TRANSACTION ISOLATION LEVEL statement is retained. Subsequent connections reusing a pooled connection use the isolation level that was in effect at the time the connection is pooled.

Individual queries issued within a connection can contain lock hints that modify the isolation for a single statement or transaction but do not affect the isolation level of the connection. Isolation levels or lock hints set in stored procedures or functions do not change the isolation level of the connection that calls them and are in effect only for the duration of the stored procedure or function call.
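
A minimal sketch of controlling the isolation level from ADO.NET, assuming connectionString is valid; neither statement appears in the original text.

C#
using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();

    // Set the session-level isolation level with Transact-SQL. The setting is
    // connection-wide and persists when the connection is returned to the pool.
    using (SqlCommand command = new SqlCommand(
        "SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;", connection))
    {
        command.ExecuteNonQuery();
    }

    // Alternatively, request an isolation level for a specific transaction.
    using (SqlTransaction transaction =
        connection.BeginTransaction(IsolationLevel.Serializable))
    {
        // Execute commands within the transaction here.
        transaction.Commit();
    }
}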

Four isolation levels defined in the SQL-92 standard were supported in early versions of SQL Server:

  • READ UNCOMMITTED is the least restrictive isolation level because it ignores locks placed by other transactions. Transactions executing under READ UNCOMMITTED can read modified data values that have not yet been committed by other transactions; these are called "dirty" reads.

  • READ COMMITTED is the default isolation level for SQL Server. It prevents dirty reads by specifying that statements cannot read data values that have been modified but not yet committed by other transactions. Other transactions can still modify, insert, or delete data between executions of individual statements within the current transaction, resulting in non-repeatable reads, or "phantom" data.

  • REPEATABLE READ is a more restrictive isolation level than READ COMMITTED. It encompasses READ COMMITTED and additionally specifies that no other transactions can modify or delete data that has been read by the current transaction until the current transaction commits. Concurrency is lower than for READ COMMITTED because shared locks on read data are held for the duration of the transaction instead of being released at the end of each statement.

  • SERIALIZABLE is the most restrictive isolation level, because it locks entire ranges of keys and holds the locks until the transaction is complete. It encompasses REPEATABLE READ and adds the restriction that other transactions cannot insert new rows into ranges that have been read by the transaction until the transaction is complete.

For more information, refer to the Transaction Locking and Row Versioning Guide.

Snapshot Isolation Level Extensions

SQL Server introduced extensions to the SQL-92 isolation levels with the introduction of the SNAPSHOT isolation level and an additional implementation of READ COMMITTED. The READ_COMMITTED_SNAPSHOT isolation level can transparently replace READ COMMITTED for all transactions.

  • SNAPSHOT isolation specifies that data read within a transaction will never reflect changes made by other simultaneous transactions. The transaction uses the data row versions that exist when the transaction begins. No locks are placed on the data when it is read, so SNAPSHOT transactions do not block other transactions from writing data. Transactions that write data do not block snapshot transactions from reading data. You need to enable snapshot isolation by setting the ALLOW_SNAPSHOT_ISOLATION database option in order to use it.

  • The READ_COMMITTED_SNAPSHOT database option determines the behavior of the default READ COMMITTED isolation level when snapshot isolation is enabled in a database. If you do not explicitly specify READ_COMMITTED_SNAPSHOT ON, READ COMMITTED is applied to all implicit transactions. This produces the same behavior as setting READ_COMMITTED_SNAPSHOT OFF (the default). When READ_COMMITTED_SNAPSHOT OFF is in effect, the Database Engine uses shared locks to enforce the default isolation level. If you set the READ_COMMITTED_SNAPSHOT database option to ON, the database engine uses row versioning and snapshot isolation as the default, instead of using locks to protect the data.

How Snapshot Isolation and Row Versioning Work

When the SNAPSHOT isolation level is enabled, each time a row is updated, the SQL Server Database Engine stores a copy of the original row in tempdb, and adds a transaction sequence number to the row. The following is the sequence of events that occurs:

  • A new transaction is initiated, and it is assigned a transaction sequence number.

  • The Database Engine reads a row within the transaction and retrieves the row version from tempdb whose sequence number is closest to, and lower than, the transaction sequence number.

  • The Database Engine checks to see if the transaction sequence number is not in the list of transaction sequence numbers of the uncommitted transactions active when the snapshot transaction started.

  • The transaction reads the version of the row from tempdb that was current as of the start of the transaction. It will not see new rows inserted after the transaction was started because those sequence number values will be higher than the value of the transaction sequence number.

  • The current transaction will see rows that were deleted after the transaction began, because there will be a row version in tempdb with a lower sequence number value.

The net effect of snapshot isolation is that the transaction sees all of the data as it existed at the start of the transaction, without honoring or placing any locks on the underlying tables. This can result in performance improvements in situations where there is contention.

A snapshot transaction always uses optimistic concurrency control, without holding any locks that would prevent other transactions from updating rows. If a snapshot transaction attempts to commit an update to a row that was changed after the transaction began, the transaction is rolled back, and an error is raised.

Working with Snapshot Isolation in ADO.NET

Snapshot isolation is supported in ADO.NET by the SqlTransaction class. If a database has been enabled for snapshot isolation but is not configured for READ_COMMITTED_SNAPSHOT ON, you must initiate a SqlTransaction using the IsolationLevel.Snapshot enumeration value when calling the BeginTransaction method. This code fragment assumes that connection is an open SqlConnection object.

C#
SqlTransaction sqlTran =   
  connection.BeginTransaction(IsolationLevel.Snapshot);  

Example

The following example demonstrates how the different isolation levels behave by attempting to access locked data, and it is not intended to be used in production code.

The code connects to the AdventureWorks sample database in SQL Server and creates a table named TestSnapshot and inserts one row of data. The code uses the ALTER DATABASE Transact-SQL statement to turn on snapshot isolation for the database, but it does not set the READ_COMMITTED_SNAPSHOT option, leaving the default READ COMMITTED isolation-level behavior in effect. The code then performs the following actions:

  • It begins, but does not complete, sqlTransaction1, which uses the SERIALIZABLE isolation level to start an update transaction. This has the effect of locking the table.

  • It opens a second connection and initiates a second transaction using the SNAPSHOT isolation level to read the data in the TestSnapshot table. Because snapshot isolation is enabled, this transaction can read the data that existed before sqlTransaction1 started.

  • It opens a third connection and initiates a transaction using the READ COMMITTED isolation level to attempt to read the data in the table. In this case, the code cannot read the data because it cannot read past the locks placed on the table in the first transaction and times out. The same result would occur if the REPEATABLE READ and SERIALIZABLE isolation levels were used because these isolation levels also cannot read past the locks placed in the first transaction.

  • It opens a fourth connection and initiates a transaction using the READ UNCOMMITTED isolation level, which performs a dirty read of the uncommitted value in sqlTransaction1. This value may never actually exist in the database if the first transaction is not committed.

  • It rolls back the first transaction and cleans up by deleting the TestSnapshot table and turning off snapshot isolation for the AdventureWorks database.

Note

The following examples use the same connection string with connection pooling turned off. If a connection is pooled, resetting its isolation level does not reset the isolation level at the server. As a result, subsequent connections that use the same pooled inner connection start with their isolation levels set to that of the pooled connection. An alternative to turning off connection pooling is to set the isolation level explicitly for each connection.

C#
// Assumes GetConnectionString returns a valid connection string
// where pooling is turned off by setting Pooling=False;. 
string connectionString = GetConnectionString();
using (SqlConnection connection1 = new SqlConnection(connectionString))
{
    // Drop the TestSnapshot table if it exists
    connection1.Open();
    SqlCommand command1 = connection1.CreateCommand();
    command1.CommandText = "IF EXISTS "
        + "(SELECT * FROM sys.tables WHERE name=N'TestSnapshot') "
        + "DROP TABLE TestSnapshot";
    try
    {
        command1.ExecuteNonQuery();
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message);
    }
    // Enable Snapshot isolation
    command1.CommandText =
        "ALTER DATABASE AdventureWorks SET ALLOW_SNAPSHOT_ISOLATION ON";
    command1.ExecuteNonQuery();

    // Create a table named TestSnapshot and insert one row of data
    command1.CommandText =
        "CREATE TABLE TestSnapshot (ID int primary key, valueCol int)";
    command1.ExecuteNonQuery();
    command1.CommandText =
        "INSERT INTO TestSnapshot VALUES (1,1)";
    command1.ExecuteNonQuery();

    // Begin, but do not complete, a transaction to update the data 
    // with the Serializable isolation level, which locks the table
    // pending the commit or rollback of the update. The original 
    // value in valueCol was 1, the proposed new value is 22.
    SqlTransaction transaction1 =
        connection1.BeginTransaction(IsolationLevel.Serializable);
    command1.Transaction = transaction1;
    command1.CommandText =
        "UPDATE TestSnapshot SET valueCol=22 WHERE ID=1";
    command1.ExecuteNonQuery();

    // Open a second connection to AdventureWorks
    using (SqlConnection connection2 = new SqlConnection(connectionString))
    {
        connection2.Open();
        // Initiate a second transaction to read from TestSnapshot
        // using Snapshot isolation. This will read the original 
        // value of 1 since transaction1 has not yet committed.
        SqlCommand command2 = connection2.CreateCommand();
        SqlTransaction transaction2 =
            connection2.BeginTransaction(IsolationLevel.Snapshot);
        command2.Transaction = transaction2;
        command2.CommandText =
            "SELECT ID, valueCol FROM TestSnapshot";
        SqlDataReader reader2 = command2.ExecuteReader();
        while (reader2.Read())
        {
            Console.WriteLine("Expected 1,1 Actual "
                + reader2.GetValue(0).ToString()
                + "," + reader2.GetValue(1).ToString());
        }
        transaction2.Commit();
    }

    // Open a third connection to AdventureWorks and
    // initiate a third transaction to read from TestSnapshot
    // using ReadCommitted isolation level. This transaction
    // will not be able to view the data because of 
    // the locks placed on the table in transaction1
    // and will time out after 4 seconds.
    // You would see the same behavior with the
    // RepeatableRead or Serializable isolation levels.
    using (SqlConnection connection3 = new SqlConnection(connectionString))
    {
        connection3.Open();
        SqlCommand command3 = connection3.CreateCommand();
        SqlTransaction transaction3 =
            connection3.BeginTransaction(IsolationLevel.ReadCommitted);
        command3.Transaction = transaction3;
        command3.CommandText =
            "SELECT ID, valueCol FROM TestSnapshot";
        command3.CommandTimeout = 4;
        try
        {
            SqlDataReader sqldatareader3 = command3.ExecuteReader();
            while (sqldatareader3.Read())
            {
                Console.WriteLine("You should never hit this.");
            }
            transaction3.Commit();
        }
        catch (Exception ex)
        {
            Console.WriteLine("Expected timeout expired exception: "
                + ex.Message);
            transaction3.Rollback();
        }
    }

    // Open a fourth connection to AdventureWorks and
    // initiate a fourth transaction to read from TestSnapshot
    // using the ReadUncommitted isolation level. ReadUncommitted
    // will not hit the table lock, and will allow a dirty read  
    // of the proposed new value 22 for valueCol. If the first
    // transaction rolls back, this value will never actually have
    // existed in the database.
    using (SqlConnection connection4 = new SqlConnection(connectionString))
    {
        connection4.Open();
        SqlCommand command4 = connection4.CreateCommand();
        SqlTransaction transaction4 =
            connection4.BeginTransaction(IsolationLevel.ReadUncommitted);
        command4.Transaction = transaction4;
        command4.CommandText =
            "SELECT ID, valueCol FROM TestSnapshot";
        SqlDataReader reader4 = command4.ExecuteReader();
        while (reader4.Read())
        {
            Console.WriteLine("Expected 1,22 Actual "
                + reader4.GetValue(0).ToString()
                + "," + reader4.GetValue(1).ToString());
        }

        transaction4.Commit();
    }

    // Roll back the first transaction
    transaction1.Rollback();
}

// CLEANUP
// Delete the TestSnapshot table and set
// ALLOW_SNAPSHOT_ISOLATION OFF
using (SqlConnection connection5 = new SqlConnection(connectionString))
{
    connection5.Open();
    SqlCommand command5 = connection5.CreateCommand();
    command5.CommandText = "DROP TABLE TestSnapshot";
    SqlCommand command6 = connection5.CreateCommand();
    command6.CommandText =
        "ALTER DATABASE AdventureWorks SET ALLOW_SNAPSHOT_ISOLATION OFF";
    try
    {
        command5.ExecuteNonQuery();
        command6.ExecuteNonQuery();
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message);
    }
}
Console.WriteLine("Done!");

Example

The following example demonstrates the behavior of snapshot isolation when data is being modified. The code performs the following actions:

  • Connects to the AdventureWorks sample database and enables SNAPSHOT isolation.

  • Creates a table named TestSnapshotUpdate and inserts three rows of sample data.

  • Begins, but does not complete, sqlTransaction1 using SNAPSHOT isolation. Three rows of data are selected in the transaction.

  • Creates a second SqlConnection to AdventureWorks and creates a second transaction using the READ COMMITTED isolation level that updates a value in one of the rows selected in sqlTransaction1.

  • Commits sqlTransaction2.

  • Returns to sqlTransaction1 and attempts to update the same row that sqlTransaction2 has already modified and committed. Error 3960 is raised, and sqlTransaction1 is rolled back automatically. The SqlException.Number and SqlException.Message are displayed in the Console window.

  • Executes clean-up code to turn off snapshot isolation in AdventureWorks and delete the TestSnapshotUpdate table.

C#
// Assumes GetConnectionString returns a valid connection string
// where pooling is turned off by setting Pooling=False;. 
string connectionString = GetConnectionString();
using (SqlConnection connection1 = new SqlConnection(connectionString))
{
    connection1.Open();
    SqlCommand command1 = connection1.CreateCommand();

    // Enable Snapshot isolation in AdventureWorks
    command1.CommandText =
        "ALTER DATABASE AdventureWorks SET ALLOW_SNAPSHOT_ISOLATION ON";
    try
    {
        command1.ExecuteNonQuery();
        Console.WriteLine(
            "Snapshot Isolation turned on in AdventureWorks.");
    }
    catch (Exception ex)
    {
        Console.WriteLine("ALLOW_SNAPSHOT_ISOLATION ON failed: {0}", ex.Message);
    }
    // Create a table 
    command1.CommandText =
        "IF EXISTS "
        + "(SELECT * FROM sys.tables "
        + "WHERE name=N'TestSnapshotUpdate')"
        + " DROP TABLE TestSnapshotUpdate";
    command1.ExecuteNonQuery();
    command1.CommandText =
        "CREATE TABLE TestSnapshotUpdate "
        + "(ID int primary key, CharCol nvarchar(100));";
    try
    {
        command1.ExecuteNonQuery();
        Console.WriteLine("TestSnapshotUpdate table created.");
    }
    catch (Exception ex)
    {
        Console.WriteLine("CREATE TABLE failed: {0}", ex.Message);
    }
    // Insert some data
    command1.CommandText =
        "INSERT INTO TestSnapshotUpdate VALUES (1,N'abcdefg');"
        + "INSERT INTO TestSnapshotUpdate VALUES (2,N'hijklmn');"
        + "INSERT INTO TestSnapshotUpdate VALUES (3,N'opqrstuv');";
    try
    {
        command1.ExecuteNonQuery();
        Console.WriteLine("Data inserted TestSnapshotUpdate table.");
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message);
    }

    // Begin, but do not complete, a transaction 
    // using the Snapshot isolation level.
    SqlTransaction transaction1 = null;
    try
    {
        transaction1 = connection1.BeginTransaction(IsolationLevel.Snapshot);
        command1.CommandText =
            "SELECT * FROM TestSnapshotUpdate WHERE ID BETWEEN 1 AND 3";
        command1.Transaction = transaction1;
        command1.ExecuteNonQuery();
        Console.WriteLine("Snapshot transaction1 started.");

        // Open a second Connection/Transaction to update data
        // using ReadCommitted. This transaction should succeed.
        using (SqlConnection connection2 = new SqlConnection(connectionString))
        {
            connection2.Open();
            SqlCommand command2 = connection2.CreateCommand();
            command2.CommandText = "UPDATE TestSnapshotUpdate SET CharCol="
                + "N'New value from Connection2' WHERE ID=1";
            SqlTransaction transaction2 =
                connection2.BeginTransaction(IsolationLevel.ReadCommitted);
            command2.Transaction = transaction2;
            try
            {
                command2.ExecuteNonQuery();
                transaction2.Commit();
                Console.WriteLine(
                    "transaction2 has modified data and committed.");
            }
            catch (SqlException ex)
            {
                Console.WriteLine(ex.Message);
                transaction2.Rollback();
            }
            finally
            {
                transaction2.Dispose();
            }
        }

        // Now try to update a row in Connection1/Transaction1.
        // This transaction should fail because Transaction2
        // succeeded in modifying the data.
        command1.CommandText =
            "UPDATE TestSnapshotUpdate SET CharCol="
            + "N'New value from Connection1' WHERE ID=1";
        command1.Transaction = transaction1;
        command1.ExecuteNonQuery();
        transaction1.Commit();
        Console.WriteLine("You should never see this.");
    }
    catch (SqlException ex)
    {
        Console.WriteLine("Expected failure for transaction1:");
        Console.WriteLine("  {0}: {1}", ex.Number, ex.Message);
    }
    finally
    {
        transaction1.Dispose();
    }
}

// CLEANUP:
// Turn off Snapshot isolation and delete the table
using (SqlConnection connection3 = new SqlConnection(connectionString))
{
    connection3.Open();
    SqlCommand command3 = connection3.CreateCommand();
    command3.CommandText =
        "ALTER DATABASE AdventureWorks SET ALLOW_SNAPSHOT_ISOLATION OFF";
    try
    {
        command3.ExecuteNonQuery();
        Console.WriteLine(
            "CLEANUP: Snapshot isolation turned off in AdventureWorks.");
    }
    catch (Exception ex)
    {
        Console.WriteLine("CLEANUP FAILED: {0}", ex.Message);
    }
    command3.CommandText = "DROP TABLE TestSnapshotUpdate";
    try
    {
        command3.ExecuteNonQuery();
        Console.WriteLine("CLEANUP: TestSnapshotUpdate table deleted.");
    }
    catch (Exception ex)
    {
        Console.WriteLine("CLEANUP FAILED: {0}", ex.Message);
    }
}

Using Lock Hints with Snapshot Isolation

In the previous example, the first transaction selects data, and a second transaction updates the data before the first transaction is able to complete, causing an update conflict when the first transaction tries to update the same row. You can reduce the chance of update conflicts in long-running snapshot transactions by supplying lock hints at the beginning of the transaction. The following SELECT statement uses the UPDLOCK hint to lock the selected rows:

SQL
SELECT * FROM TestSnapshotUpdate WITH (UPDLOCK)
  WHERE ID BETWEEN 1 AND 3

Using the UPDLOCK lock hint blocks any other transactions that attempt to update the rows before the first transaction completes. This guarantees that the selected rows have no conflicts when they are updated later in the transaction. See "Locking Hints" in SQL Server Books Online.

If your application experiences many update conflicts, snapshot isolation may not be the best choice. Use lock hints only when they are really needed; your application should not depend on lock hints for its routine operation.
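
As a rough sketch of how the hint fits into client code, the following example reuses the TestSnapshotUpdate table and connection string from the earlier example and takes the update locks at the start of the snapshot transaction.

C#
// Sketch: take an update lock on the rows at the start of a snapshot
// transaction so that a later UPDATE in the same transaction cannot
// conflict with a concurrent writer. Reuses the TestSnapshotUpdate
// table and connection string from the earlier example.
using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();
    SqlTransaction transaction =
        connection.BeginTransaction(IsolationLevel.Snapshot);
    SqlCommand command = connection.CreateCommand();
    command.Transaction = transaction;

    // Lock the selected rows for the duration of the transaction.
    command.CommandText =
        "SELECT ID, CharCol FROM TestSnapshotUpdate WITH (UPDLOCK) "
        + "WHERE ID BETWEEN 1 AND 3";
    using (SqlDataReader reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            Console.WriteLine("{0}: {1}",
                reader.GetInt32(0), reader.GetString(1));
        }
    }

    // The locked rows can now be updated without an update conflict.
    command.CommandText =
        "UPDATE TestSnapshotUpdate SET CharCol = N'Updated under UPDLOCK' "
        + "WHERE ID = 1";
    command.ExecuteNonQuery();
    transaction.Commit();
}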

See also

SqlClient Support for High Availability, Disaster Recovery

This topic discusses SqlClient support (added in .NET Framework 4.5) for high-availability, disaster-recovery scenarios with AlwaysOn Availability Groups. The AlwaysOn Availability Groups feature was introduced in SQL Server 2012. For more information about AlwaysOn Availability Groups, see SQL Server Books Online.

You can now specify the availability group listener of a (high-availability, disaster-recovery) availability group (AG) or a SQL Server 2012 Failover Cluster Instance in the connection string. If a SqlClient application is connected to an AlwaysOn database that fails over, the original connection is broken, and the application must open a new connection to continue work after the failover.

If you are not connecting to an availability group listener or SQL Server 2012 Failover Cluster Instance, and if multiple IP addresses are associated with a hostname, SqlClient iterates sequentially through all IP addresses associated with the DNS entry. This can be time consuming if the first IP address returned by the DNS server is not bound to any network interface card (NIC). When connecting to an availability group listener or SQL Server 2012 Failover Cluster Instance, SqlClient attempts to establish connections to all IP addresses in parallel, and if a connection attempt succeeds, the driver discards any pending connection attempts.

Note

Increasing the connection timeout and implementing connection retry logic increase the probability that an application will connect to an availability group. Because a connection can fail during a failover, you should retry a failed connection until it reconnects.
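
A minimal sketch of such retry logic follows; the helper name, retry count, and delay are illustrative choices rather than part of SqlClient, and production code would typically also log failures and distinguish transient errors from permanent ones.

C#
// Sketch of simple connection retry logic. The retry count, delay, and
// helper name are placeholders; adjust them for your environment.
static SqlConnection OpenWithRetry(string connectionString)
{
    const int maxAttempts = 5;
    for (int attempt = 1; ; attempt++)
    {
        SqlConnection connection = new SqlConnection(connectionString);
        try
        {
            connection.Open();
            return connection;
        }
        catch (SqlException)
        {
            connection.Dispose();
            if (attempt >= maxAttempts)
            {
                throw;
            }
            // The connection may have failed because a failover is in
            // progress; wait briefly and try again.
            System.Threading.Thread.Sleep(TimeSpan.FromSeconds(5));
        }
    }
}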

The following connection properties were added to SqlClient in .NET Framework 4.5:

  • ApplicationIntent

  • MultiSubnetFailover

You can also modify these connection string keywords programmatically through the corresponding SqlConnectionStringBuilder properties, as the sketch after this list shows:

  1. ApplicationIntent

  2. MultiSubnetFailover
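
For example, the following sketch builds such a connection string with SqlConnectionStringBuilder; the listener name, database, and authentication settings are placeholders.

C#
// Sketch: build a connection string for an availability group listener.
// The listener name, database name, and credentials are placeholders.
SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder();
builder.DataSource = "tcp:AGListener,1433";       // availability group listener
builder.InitialCatalog = "AdventureWorks";
builder.IntegratedSecurity = true;
builder.MultiSubnetFailover = true;               // faster multi-subnet failover
builder.ApplicationIntent = ApplicationIntent.ReadWrite;

using (SqlConnection connection = new SqlConnection(builder.ConnectionString))
{
    connection.Open();
    // ... issue commands as usual ...
}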

Note

Setting MultiSubnetFailover to true isn't required with .NET Framework 4.6.1 or later versions.

Connecting With MultiSubnetFailover

Always specify MultiSubnetFailover=True when connecting to a SQL Server 2012 availability group listener or SQL Server 2012 Failover Cluster Instance. MultiSubnetFailover enables faster failover for all availability groups and failover cluster instances in SQL Server 2012 and will significantly reduce failover time for single- and multi-subnet AlwaysOn topologies. During a multi-subnet failover, the client attempts connections in parallel. During a subnet failover, the client aggressively retries the TCP connection.

The MultiSubnetFailover connection property indicates that the application is being deployed in an availability group or SQL Server 2012 Failover Cluster Instance and that SqlClient will try to connect to the database on the primary SQL Server instance by trying to connect to all the IP addresses. When MultiSubnetFailover=True is specified for a connection, the client retries TCP connection attempts faster than the operating system’s default TCP retransmit intervals. This enables faster reconnection after failover of either an AlwaysOn Availability Group or an AlwaysOn Failover Cluster Instance, and is applicable to both single- and multi-subnet Availability Groups and Failover Cluster Instances.

For more information about connection string keywords in SqlClient, see ConnectionString.

Specifying MultiSubnetFailover=True when connecting to something other than an availability group listener or SQL Server 2012 Failover Cluster Instance may result in a negative performance impact, and is not supported.

Use the following guidelines to connect to a server in an availability group or SQL Server 2012 Failover Cluster Instance:

  • Use the MultiSubnetFailover connection property when connecting to a single-subnet or multi-subnet availability group; it improves performance in both cases.

  • To connect to an availability group, specify the availability group listener of the availability group as the server in your connection string.

  • Connecting to a SQL Server instance configured with more than 64 IP addresses will cause a connection failure.

  • The behavior of an application that uses the MultiSubnetFailover connection property is not affected by the type of authentication: SQL Server Authentication, Kerberos Authentication, or Windows Authentication.

  • Increase the value of Connect Timeout to accommodate failover time and reduce application connection retry attempts.

  • Distributed transactions are not supported.

If read-only routing is not in effect, connecting to a secondary replica location will fail in the following situations:

  1. If the secondary replica location is not configured to accept connections.

  2. If an application uses ApplicationIntent=ReadWrite (discussed below) and the secondary replica location is configured for read-only access.

SqlDependency is not supported on read-only secondary replicas.

A connection will fail if a primary replica is configured to reject read-only workloads and the connection string contains ApplicationIntent=ReadOnly.

Upgrading to Use Multi-Subnet Clusters from Database Mirroring

A connection error (ArgumentException) will occur if the MultiSubnetFailover and Failover Partner connection keywords are present in the connection string, or if MultiSubnetFailover=True and a protocol other than TCP is used. An error (SqlException) will also occur if MultiSubnetFailover is used and the SQL Server returns a failover partner response indicating it is part of a database mirroring pair.

If you upgrade a SqlClient application that currently uses database mirroring to a multi-subnet scenario, you should remove the Failover Partner connection property, set MultiSubnetFailover to True, and replace the server name in the connection string with an availability group listener. If a connection string uses Failover Partner and MultiSubnetFailover=True, the driver will generate an error. However, if a connection string uses Failover Partner and MultiSubnetFailover=False (or ApplicationIntent=ReadWrite), the application will use database mirroring.

The driver will return an error if database mirroring is used on the primary database in the AG, and if MultiSubnetFailover=True is used in the connection string that connects to a primary database instead of to an availability group listener.
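
As an illustration of that change (the server, partner, and listener names below are placeholders, not values from this topic), the migration amounts to dropping the Failover Partner keyword and pointing Data Source at the listener with MultiSubnetFailover=True:

C#
// Before: a database mirroring connection string (placeholder names).
string mirroringConnection =
    "Data Source=PrimaryServer;Failover Partner=MirrorServer;"
    + "Initial Catalog=AdventureWorks;Integrated Security=SSPI;";

// After: connect through the availability group listener instead,
// with MultiSubnetFailover enabled and no Failover Partner keyword.
string availabilityGroupConnection =
    "Data Source=AGListener;MultiSubnetFailover=True;"
    + "Initial Catalog=AdventureWorks;Integrated Security=SSPI;";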

Specifying Application Intent

When ApplicationIntent=ReadOnly, the client requests a read workload when connecting to an AlwaysOn enabled database. The server enforces the intent at connection time and during a USE database statement, but only for an AlwaysOn enabled database.

The ApplicationIntent keyword does not work with legacy, read-only databases.

A database can allow or disallow read workloads on the targeted AlwaysOn database. (This is done with the ALLOW_CONNECTIONS clause of the PRIMARY_ROLE and SECONDARY_ROLE Transact-SQL statements.)

The ApplicationIntent keyword is used to enable read-only routing.

Read-Only Routing

Read-only routing is a feature that can ensure the availability of a read-only replica of a database. To enable read-only routing:

  1. You must connect to an AlwaysOn availability group listener.

  2. The ApplicationIntent connection string keyword must be set to ReadOnly.

  3. The Availability Group must be configured by the database administrator to enable read-only routing.

It is possible that multiple connections using read-only routing will not all connect to the same read-only replica. Changes in database synchronization or changes in the server's routing configuration can result in client connections to different read-only replicas. To ensure that all read-only requests connect to the same read-only replica, do not pass an availability group listener to the Data Source connection string keyword. Instead, specify the name of the read-only instance.

Read-only routing may take longer than connecting to the primary because read-only routing first connects to the primary and then looks for the best available readable secondary. Because of this, you should increase your login timeout.
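
For illustration, a connection string that requests read-only routing through an availability group listener might look like the following sketch; the listener and database names are placeholders.

C#
// Sketch: request read-intent routing through an availability group
// listener. The listener and database names are placeholders.
string readOnlyConnection =
    "Data Source=AGListener;Initial Catalog=AdventureWorks;"
    + "Integrated Security=SSPI;ApplicationIntent=ReadOnly;"
    + "MultiSubnetFailover=True;";

using (SqlConnection connection = new SqlConnection(readOnlyConnection))
{
    connection.Open();
    // Queries on this connection are routed to a readable secondary
    // replica when read-only routing is configured for the group.
}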

See also

SqlClient Support for LocalDB

Beginning with SQL Server code name "Denali" (SQL Server 2012), a lightweight version of SQL Server, called LocalDB, is available. This topic discusses how to connect to a LocalDB database.

Remarks

For more information about LocalDB, including how to install LocalDB and configure your LocalDB instance, see SQL Server Books Online.

To summarize what you can do with LocalDB:

  • Create and start LocalDB instances with sqllocaldb.exe or your app.config file.

  • Use sqlcmd.exe to add and modify databases in a LocalDB instance. For example, sqlcmd -S (localdb)\myinst.

  • Use the AttachDBFilename connection string keyword to add a database to your LocalDB instance. When using AttachDBFilename, if you do not specify the name of the database with the Database connection string keyword, the database will be removed from the LocalDB instance when the application closes.

  • Specify a LocalDB instance in your connection string. For example, if your instance name is myInstance, the connection string would include:

  • server=(localdb)\\myInstance  
    

User Instance=True is not allowed when connecting to a LocalDB database.

You can download LocalDB from Microsoft SQL Server 2012 Feature Pack. If you will use sqlcmd.exe to modify data in your LocalDB instance, you will need sqlcmd from SQL Server 2012, which you can also get from the SQL Server 2012 Feature Pack.

Programmatically Create a Named Instance

An application can create a named instance and specify a database as follows (an example connection follows the list):

  • Specify the LocalDB instances to create in the app.config file, as follows. The version number of the instance should be the same as the version number of your LocalDB installation.

    XML
  • <?xml version="1.0" encoding="utf-8" ?>  
    <configuration>  
      <configSections>  
        <section  
        name="system.data.localdb"  
        type="System.Data.LocalDBConfigurationSection,System.Data,Version=4.0.0.0,Culture=neutral,PublicKeyToken=b77a5c561934e089"/>  
      </configSections>  
      <system.data.localdb>  
        <localdbinstances>  
          <add name="myInstance" version="11.0" />  
        </localdbinstances>  
      </system.data.localdb>  
    </configuration>  
    
  • Specify the name of the instance using the server connection string keyword. The instance name specified in the server connection string keyword must match the name specified in the app.config file.

  • Use the AttachDBFilename connection string keyword to specify the .MDF file.
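
Putting these pieces together, a connection that targets the myInstance instance defined in the app.config example above and attaches a database file might look like the following sketch; the file path and database name are placeholders.

C#
// Sketch: connect to the LocalDB instance named in app.config and attach
// a database file. The file path and database name are placeholders.
string localDbConnection =
    @"Server=(localdb)\myInstance;Integrated Security=true;"
    + @"AttachDBFilename=C:\Data\MyDatabase.mdf;Database=MyDatabase;";

using (SqlConnection connection = new SqlConnection(localDbConnection))
{
    connection.Open();
    Console.WriteLine("Connected to LocalDB instance myInstance.");
}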

See also

LINQ to SQL

LINQ to SQL is a component of .NET Framework version 3.5 that provides a run-time infrastructure for managing relational data as objects.

Note

Relational data appears as a collection of two-dimensional tables (relations or flat files), where common columns relate tables to each other. To use LINQ to SQL effectively, you must have some familiarity with the underlying principles of relational databases.

In LINQ to SQL, the data model of a relational database is mapped to an object model expressed in the programming language of the developer. When the application runs, LINQ to SQL translates into SQL the language-integrated queries in the object model and sends them to the database for execution. When the database returns the results, LINQ to SQL translates them back to objects that you can work with in your own programming language.

Developers using Visual Studio typically use the Object Relational Designer, which provides a user interface for implementing many of the features of LINQ to SQL.

The documentation that is included with this release of LINQ to SQL describes the basic building blocks, processes, and techniques you need for building LINQ to SQL applications. You can also search Microsoft Docs for specific issues, and you can participate in the LINQ Forum, where you can discuss more complex topics in detail with experts. Finally, the LINQ to SQL: .NET Language-Integrated Query for Relational Data white paper details LINQ to SQL technology, complete with Visual Basic and C# code examples.

In This Section

Getting Started
Provides a condensed overview of LINQ to SQL along with information about how to get started using LINQ to SQL.

Programming Guide
Provides steps for mapping, querying, updating, debugging, and similar tasks.

Reference
Provides reference information about several aspects of LINQ to SQL. Topics include SQL-CLR Type Mapping, Standard Query Operator Translation, and more.

Samples
Provides links to Visual Basic and C# samples.

Related Sections

Language-Integrated Query (LINQ) - C#
Provides overviews of LINQ technologies in C#.

Language-Integrated Query (LINQ) - Visual Basic
Provides overviews of LINQ technologies in Visual Basic.

LINQ
Describes LINQ technologies for Visual Basic users.

LINQ and ADO.NET
Links to the ADO.NET portal.

LINQ to SQL Walkthroughs
Lists walkthroughs available for LINQ to SQL.

Downloading Sample Databases
Describes how to download sample databases used in the documentation.

LinqDataSource Web Server Control Overview
Describes how the LinqDataSource control exposes Language-Integrated Query (LINQ) to Web developers through the ASP.NET data-source control architecture.

Getting Started

By using LINQ to SQL, you can use the LINQ technology to access SQL databases just as you would access an in-memory collection.

For example, the nw object in the following code is created to represent the Northwind database, the Customers table is targeted, the rows are filtered for Customers from London, and a string for CompanyName is selected for retrieval.

When the loop is executed, the collection of CompanyName values is retrieved.

C#
// Northwnd inherits from System.Data.Linq.DataContext.
Northwnd nw = new Northwnd(@"northwnd.mdf");
// or, if you are not using SQL Server Express
// Northwnd nw = new Northwnd("Database=Northwind;Server=server_name;Integrated Security=SSPI");

var companyNameQuery =
    from cust in nw.Customers
    where cust.City == "London"
    select cust.CompanyName;

foreach (var customer in companyNameQuery)
{
    Console.WriteLine(customer);
}

Next Steps

For some additional examples, including inserting and updating, see What You Can Do With LINQ to SQL.

Next, try some walkthroughs and tutorials to have a hands-on experience of using LINQ to SQL. See Learning by Walkthroughs.

Finally, learn how to get started on your own LINQ to SQL project by reading Typical Steps for Using LINQ to SQL.

See also

What You Can Do With LINQ to SQL

LINQ to SQL supports all the key capabilities you would expect as a SQL developer. You can query for information, and insert, update, and delete information from tables.

Selecting

Selecting (projection) is achieved by just writing a LINQ query in your own programming language, and then executing that query to retrieve the results. LINQ to SQL itself translates your query into the SQL operations that you are familiar with. For more information, see LINQ to SQL.

In the following example, the company names of customers from London are retrieved and displayed in the console window.

C#
// Northwnd inherits from System.Data.Linq.DataContext.
Northwnd nw = new Northwnd(@"northwnd.mdf");
// or, if you are not using SQL Server Express
// Northwnd nw = new Northwnd("Database=Northwind;Server=server_name;Integrated Security=SSPI");

var companyNameQuery =
    from cust in nw.Customers
    where cust.City == "London"
    select cust.CompanyName;

foreach (var customer in companyNameQuery)
{
    Console.WriteLine(customer);
}

Inserting

To execute a SQL Insert, just add objects to the object model you have created, and call SubmitChanges on the DataContext.

In the following example, a new customer and information about the customer is added to the Customers table by using InsertOnSubmit.

C#
// Northwnd inherits from System.Data.Linq.DataContext.
Northwnd nw = new Northwnd(@"northwnd.mdf");

Customer cust = new Customer();
cust.CompanyName = "SomeCompany";
cust.City = "London";
cust.CustomerID = "98128";
cust.PostalCode = "55555";
cust.Phone = "555-555-5555";
nw.Customers.InsertOnSubmit(cust);

// At this point, the new Customer object is added in the object model.
// In LINQ to SQL, the change is not sent to the database until
// SubmitChanges is called.
nw.SubmitChanges();

Updating

To Update a database entry, first retrieve the item and edit it directly in the object model. After you have modified the object, call SubmitChanges on the DataContext to update the database.

In the following example, all customers who are from London are retrieved. Then the name of the city is changed from "London" to "London - Metro". Finally, SubmitChanges is called to send the changes to the database.

C#
Northwnd nw = new Northwnd(@"northwnd.mdf");

var cityNameQuery =
    from cust in nw.Customers
    where cust.City.Contains("London")
    select cust;

foreach (var customer in cityNameQuery)
{
    if (customer.City == "London")
    {
        customer.City = "London - Metro";
    }
}
nw.SubmitChanges();

Deleting

To Delete an item, remove the item from the collection to which it belongs, and then call SubmitChanges on the DataContext to commit the change.

Note

LINQ to SQL does not recognize cascade-delete operations. If you want to delete a row in a table that has constraints against it, see How to: Delete Rows From the Database.

In the following example, the customer who has CustomerID of 98128 is retrieved from the database. Then, after confirming that the customer row was retrieved, DeleteOnSubmit is called to remove that object from the collection. Finally, SubmitChanges is called to forward the deletion to the database.

C#
Northwnd nw = new Northwnd(@"northwnd.mdf");
var deleteIndivCust =
    from cust in nw.Customers
    where cust.CustomerID == "98128"
    select cust;

if (deleteIndivCust.Count() > 0)
{
    nw.Customers.DeleteOnSubmit(deleteIndivCust.First());
    nw.SubmitChanges();
}

See also

Typical Steps for Using LINQ to SQL

To implement a LINQ to SQL application, you follow the steps described later in this topic. Many of the steps are optional; you can often use your object model in its default state.

For a really fast start, use the Object Relational Designer to create your object model and start coding your queries.

Creating the Object Model

The first step is to create an object model from the metadata of an existing relational database. The object model represents the database according to the programming language of the developer. For more information, see The LINQ to SQL Object Model.

1. Select a tool to create the model.

Three tools are available for creating the model.

  • The Object Relational Designer

    This designer provides a rich user interface for creating an object model from an existing database. This tool is part of the Visual Studio IDE, and is best suited to small or medium databases.

  • The SQLMetal code-generation tool

    This command-line utility provides a slightly different set of options from the O/R Designer. Modeling large databases is best done by using this tool. For more information, see SqlMetal.exe (Code Generation Tool).

  • A code editor

    You can write your own code by using either the Visual Studio code editor or another editor. We do not recommend this approach, which can be prone to errors, when you have an existing database and can use either the O/R Designer or the SQLMetal tool. However, the code editor can be valuable for refining or modifying code you have already generated by using other tools. For more information, see How to: Customize Entity Classes by Using the Code Editor.

2. Select the kind of code you want to generate.

  • A C# or Visual Basic source code file for attribute-based mapping.

    You then include this code file in your Visual Studio project. For more information, see Attribute-Based Mapping.

  • An XML file for external mapping.

    By using this approach, you can keep the mapping metadata out of your application code. For more information, see External Mapping.

    Note

    The O/R Designer does not support the generation of external mapping files. You must use the SQLMetal tool to implement this feature.

  • A DBML file, which you can modify before generating a final code file.

    This is an advanced feature.

3. Refine the code file to reflect the needs of your application.

For this purpose, you can use either the O/R Designer or the code editor.

Using the Object Model

The following illustration shows the relationship between the developer and the data in a two-tier scenario. For other scenarios, see N-Tier and Remote Applications with LINQ to SQL.

(Illustration: the LINQ to SQL object model between the application and the database.)

Now that you have the object model, you describe information requests and manipulate data within that model. You think in terms of the objects and properties in your object model and not in terms of the rows and columns of the database. You do not deal directly with the database.

When you instruct LINQ to SQL to either execute a query that you have described or call SubmitChanges() on data that you have manipulated, LINQ to SQL communicates with the database in the language of the database.

The following represents typical steps for using the object model that you have created.

1. Create queries to retrieve information from the database.

For more information, see Query Concepts and Query Examples.

2. Override default behaviors for Insert, Update, and Delete.

This step is optional. For more information, see Customizing Insert, Update, and Delete Operations.

3. Set appropriate options to detect and report concurrency conflicts.

You can leave your model with its default values for handling concurrency conflicts, or you can change it to suit your purposes. For more information, see How to: Specify Which Members are Tested for Concurrency Conflicts and How to: Specify When Concurrency Exceptions are Thrown.

4. Establish an inheritance hierarchy.

This step is optional. For more information, see Inheritance Support.

5. Provide an appropriate user interface.

This step is optional, and depends on how your application will be used.

6. Debug and test your application.

For more information, see Debugging Support.

See also

Get the sample databases for ADO.NET code samples

A number of examples and walkthroughs in the LINQ to SQL documentation use sample SQL Server databases and SQL Server Express. You can download these products free of charge from Microsoft.

Get the Northwind sample database for SQL Server

Download the script instnwnd.sql from the following GitHub repository to create and load the Northwind sample database for SQL Server:

Northwind and pubs sample databases for Microsoft SQL Server

Before you can use the Northwind database, you have to run the downloaded instnwnd.sql script file to recreate the database on an instance of SQL Server by using SQL Server Management Studio or a similar tool. Follow the instructions in the Readme file in the repository.

Tip

If you're looking for the Northwind database for Microsoft Access, see Install the Northwind sample database for Microsoft Access.

Get the Northwind sample database for Microsoft Access

The Northwind sample database for Microsoft Access is not available on the Microsoft Download Center. To install Northwind directly from within Access, do the following things:

  1. Open Access.

  2. Enter Northwind in the Search for Online Templates box, and then select Enter.

  3. On the results screen, select Northwind. A new window opens with a description of the Northwind database.

  4. In the new window, in the File Name text box, provide a filename for your copy of the Northwind database.

  5. Select Create. Access downloads the Northwind database and prepares the file.

  6. When this process is complete, the database opens with a Welcome screen.

Get the AdventureWorks sample database for SQL Server

Download the AdventureWorks sample database for SQL Server from the following GitHub repository:

AdventureWorks sample databases

After you download one of the database backup (*.bak) files, restore the backup to an instance of SQL Server by using SQL Server Management Studio (SSMS). See Get SQL Server Management Studio.

Get SQL Server Management Studio

If you want to view or modify a database that you've downloaded, you can use SQL Server Management Studio (SSMS). Download SSMS from the following page:

Download SQL Server Management Studio (SSMS)

You can also view and manage databases in the Visual Studio integrated development environment (IDE). In Visual Studio, connect to the database from SQL Server Object Explorer, or create a Data Connection to the database in Server Explorer. Open these explorer panes from the View menu.

Get SQL Server Express

SQL Server Express is a free, entry-level edition of SQL Server that you can redistribute with applications. Download SQL Server Express from the following page:

SQL Server Express Edition

If you're using Visual Studio, SQL Server Express LocalDB is included in the free Community edition of Visual Studio, as well as the Professional and higher editions.

See also

Learning by Walkthroughs

The LINQ to SQL documentation provides several walkthroughs. This topic addresses some general walkthrough issues (including troubleshooting), and provides links to several entry-level walkthroughs for learning about LINQ to SQL.

Note

The walkthroughs in this Getting Started section expose you to the basic code that supports LINQ to SQL technology. In actual practice, you will typically use the Object Relational Designer and Windows Forms projects to implement your LINQ to SQL applications. The O/R Designer documentation provides examples and walkthroughs for this purpose.

Getting Started Walkthroughs

Several walkthroughs are available in this section. These walkthroughs are based on the sample Northwind database, and present LINQ to SQL features at a gentle pace with minimal complexities.

A typical progression to follow would be as follows:

  • Create an entity class and execute a simple query: Walkthrough: Simple Object Model and Query (Visual Basic) or Walkthrough: Simple Object Model and Query (C#).

  • Add a second class and execute a more complex query (requires completion of the previous walkthrough): Walkthrough: Querying Across Relationships (Visual Basic) or Walkthrough: Querying Across Relationships (C#).

  • Add, change, and delete items in the database: Walkthrough: Manipulating Data (Visual Basic) or Walkthrough: Manipulating Data (C#).

  • Use stored procedures: Walkthrough: Using Only Stored Procedures (Visual Basic) or Walkthrough: Using Only Stored Procedures (C#).

General

The following information pertains to these walkthroughs in general:

  • Environment: Each LINQ to SQL walkthrough uses Visual Studio as its integrated development environment (IDE).

  • SQL engines: These walkthroughs are written to be implemented by using SQL Server Express. If you do not have SQL Server Express, you can download it free of charge. For more information, see Downloading Sample Databases.

    Note

    LINQ to SQL walkthroughs use a file name as a connection string. Simply specifying a file name is a convenience that LINQ to SQL provides for SQL Server Express users. Always pay attention to security issues. For more information, see Security in LINQ to SQL.

  • LINQ to SQL walkthroughs typically require the Northwind sample database. For more information, see Downloading Sample Databases.

  • The dialog boxes and menu commands you see in walkthroughs might differ from those described in Help, depending on your active settings or Visual Studio edition. To change your settings, click Import and Export Settings on the Tools menu. For more information, see Personalize the Visual Studio IDE.

  • For walkthroughs that address multi-tier scenarios, a server must be located on a computer that is distinct from the development computer, and you must have appropriate permissions to access the server.

  • The name of the class that typically represents the Orders table in the Northwind sample database is [Order]. The escaping is required because Order is a keyword in Visual Basic.

Troubleshooting

Run-time errors can occur because you do not have sufficient permissions to access the databases used in these walkthroughs. See the following steps to help resolve the most common of these issues.

Log-On Issues

Your application might be trying to access the database by using a database logon that the server does not accept.

To verify or change the database logon
  1. On the Windows Start menu, point to All Programs, point to Microsoft SQL Server 2005, point to Configuration Tools, and then click SQL Server Configuration Manager.

  2. In the left pane of the SQL Server Configuration Manager, click SQL Server 2005 Services.

  3. In the right pane, right-click SQL Server (SQLEXPRESS), and then click Properties.

  4. Click the Log On tab and verify how you are trying to log on to the server.

    In most cases, Local System works.

    If you make a change, click Restart to restart the service.

Protocols

At times, protocols might not be set correctly for your application to access the database. For example, the Named Pipes protocol, which is required for walkthroughs in LINQ to SQL, is not enabled by default.

To enable the Named Pipes protocol
  1. In the left pane of the SQL Server Configuration Manager, expand SQL Server 2005 Network Configuration, and then click Protocols for SQLEXPRESS.

  2. In the right pane, make sure that the Named Pipes protocol is enabled. If it is not, right-click Named Pipes and then click Enable.

    You will have to stop and restart the service. Follow the steps in the next procedure.

Stopping and Restarting the Service

You must stop and restart services before your changes can take effect.

To stop and restart the service
  1. In the left pane of the SQL Server Configuration Manager, click SQL Server 2005 Services.

  2. In the right pane, right-click SQL Server (SQLEXPRESS), and then click Stop.

  3. Right-click SQL Server (SQLEXPRESS), and then click Restart.

See also

Walkthrough: Simple Object Model and Query (Visual Basic)

This walkthrough provides a fundamental end-to-end LINQ to SQL scenario with minimal complexities. You will create an entity class that models the Customers table in the sample Northwind database. You will then create a simple query to list customers who are located in London.

This walkthrough is code-oriented by design to help show LINQ to SQL concepts. Normally, you would use the Object Relational Designer to create your object model.

Note

Your computer might show different names or locations for some of the Visual Studio user interface elements in the following instructions. The Visual Studio edition that you have and the settings that you use determine these elements. For more information, see Personalizing the IDE.

This walkthrough was written by using Visual Basic Development Settings.

Prerequisites

  • This walkthrough uses a dedicated folder ("c:\linqtest") to hold files. Create this folder before you begin the walkthrough.

  • This walkthrough requires the Northwind sample database. If you do not have this database on your development computer, you can download it from the Microsoft download site. For instructions, see Downloading Sample Databases. After you have downloaded the database, copy the file to the c:\linqtest folder.

Overview

This walkthrough consists of six main tasks:

  • Creating a LINQ to SQL solution in Visual Studio.

  • Mapping a class to a database table.

  • Designating properties on the class to represent database columns.

  • Specifying the connection to the Northwind database.

  • Creating a simple query to run against the database.

  • Executing the query and observing the results.

Creating a LINQ to SQL Solution

In this first task, you create a Visual Studio solution that contains the necessary references to build and run a LINQ to SQL project.

To create a LINQ to SQL solution

  1. On the File menu, click New Project.

  2. In the Project types pane of the New Project dialog box, click Visual Basic.

  3. In the Templates pane, click Console Application.

  4. In the Name box, type LinqConsoleApp.

  5. Click OK.

Adding LINQ References and Directives

This walkthrough uses assemblies that might not be installed by default in your project. If System.Data.Linq is not listed as a reference in your project (click Show All Files in Solution Explorer and expand the References node), add it, as explained in the following steps.

To add System.Data.Linq

  1. In Solution Explorer, right-click References, and then click Add Reference.

  2. In the Add Reference dialog box, click .NET, click the System.Data.Linq assembly, and then click OK.

    The assembly is added to the project.

  3. Also in the Add Reference dialog box, click .NET, scroll to and click System.Windows.Forms, and then click OK.

    This assembly, which supports the message box in the walkthrough, is added to the project.

  4. Add the following directives above Module1:

    VB
  1. Imports System.Data.Linq
    Imports System.Data.Linq.Mapping
    Imports System.Windows.Forms
    

Mapping a Class to a Database Table

In this step, you create a class and map it to a database table. Such a class is termed an entity class. Note that the mapping is accomplished by just adding the TableAttribute attribute. The Name property specifies the name of the table in the database.

To create an entity class and map it to a database table

  • Type or paste the following code into Module1.vb immediately above Sub Main:

    VB
  • <Table(Name:="Customers")> _
    Public Class Customer
    End Class
    

Designating Properties on the Class to Represent Database Columns

In this step, you accomplish several tasks.

  • You use the ColumnAttribute attribute to designate CustomerID and City properties on the entity class as representing columns in the database table.

  • You designate the CustomerID property as representing a primary key column in the database.

  • You designate _CustomerID and _City fields for private storage. LINQ to SQL can then store and retrieve values directly, instead of using public accessors that might include business logic.

To represent characteristics of two database columns

  • Type or paste the following code into Module1.vb just before End Class:

    VB
  • Private _CustomerID As String
    <Column(IsPrimaryKey:=True, Storage:="_CustomerID")> _
    Public Property CustomerID() As String
        Get
            Return Me._CustomerID
        End Get
        Set(ByVal value As String)
            Me._CustomerID = value
        End Set
    End Property
    
    Private _City As String
    <Column(Storage:="_City")> _
    Public Property City() As String
        Get
            Return Me._City
        End Get
        Set(ByVal value As String)
            Me._City = value
        End Set
    End Property
    

Specifying the Connection to the Northwind Database

In this step you use a DataContext object to establish a connection between your code-based data structures and the database itself. The DataContext is the main channel through which you retrieve objects from the database and submit changes.

You also declare a Table(Of Customer) to act as the logical, typed table for your queries against the Customers table in the database. You will create and execute these queries in later steps.

To specify the database connection

  • Type or paste the following code into the Sub Main method.

    Note that the northwnd.mdf file is assumed to be in the linqtest folder. For more information, see the Prerequisites section earlier in this walkthrough.

    VB
  • ' Use a connection string.
    Dim db As New DataContext _
        ("c:\linqtest\northwnd.mdf")
    
    ' Get a typed table to run queries.
    Dim Customers As Table(Of Customer) = _
        db.GetTable(Of Customer)()
    

Creating a Simple Query

In this step, you create a query to find which customers in the database Customers table are located in London. The query code in this step just describes the query. It does not execute it. This approach is known as deferred execution. For more information, see Introduction to LINQ Queries (C#).

You will also produce a log output to show the SQL commands that LINQ to SQL generates. This logging feature (which uses Log) is helpful in debugging, and in determining that the commands being sent to the database accurately represent your query.

To create a simple query

  • Type or paste the following code into the Sub Main method after the Table(Of Customer) declaration:

    VB
  • ' Attach the log to show generated SQL in a console window.
    db.Log = Console.Out
    
    ' Query for customers in London.
    Dim custQuery = _
        From cust In Customers _
        Where cust.City = "London" _
        Select cust
    

Executing the Query

In this step, you actually execute the query. The query expressions you created in the previous steps are not evaluated until the results are needed. When you begin the For Each iteration, a SQL command is executed against the database and objects are materialized.

To execute the query

  1. Type or paste the following code at the end of the Sub Main method (after the query description):

    VB
  1. ' Format the message box.
    Dim msg As String = "", title As String = "London customers:", _
        response As MsgBoxResult, style As MsgBoxStyle = _
        MsgBoxStyle.Information
    
    ' Execute the query.
    For Each custObj In custQuery
        msg &= String.Format(custObj.CustomerID & vbCrLf)
    Next
    
    ' Display the results.
    response = MsgBox(msg, style, title)
    
  2. Press F5 to debug the application.

    Note

    If your application generates a run-time error, see the Troubleshooting section of Learning by Walkthroughs.

    The message box displays a list of six customers. The Console window displays the generated SQL code.

  3. Click OK to dismiss the message box.

    The application closes.

  4. On the File menu, click Save All.

    You will need this application if you continue with the next walkthrough.

Next Steps

The Walkthrough: Querying Across Relationships (Visual Basic) topic continues where this walkthrough ends. The Querying Across Relationships walkthrough demonstrates how LINQ to SQL can query across tables, similar to joins in a relational database.

If you want to do the Querying Across Relationships walkthrough, make sure to save the solution for the walkthrough you have just completed, which is a prerequisite.

See also

Walkthrough: Querying Across Relationships (Visual Basic)

This walkthrough demonstrates the use of LINQ to SQL associations to represent foreign-key relationships in the database.

Note

Your computer might show different names or locations for some of the Visual Studio user interface elements in the following instructions. The Visual Studio edition that you have and the settings that you use determine these elements. For more information, see Personalizing the IDE.

This walkthrough was written by using Visual Basic Development Settings.

Prerequisites

You must have completed Walkthrough: Simple Object Model and Query (Visual Basic). This walkthrough builds on that one, including the presence of the northwnd.mdf file in c:\linqtest.

Overview

This walkthrough consists of three main tasks:

  • Adding an entity class to represent the Orders table in the sample Northwind database.

  • Supplementing annotations to the Customer class to enhance the relationship between the Customer and Order classes.

  • Creating and running a query to test the process of obtaining Order information by using the Customer class.

Mapping Relationships across Tables

After the Customer class definition, create the Order entity class definition, which includes the following code indicating that Orders.CustomerID relates as a foreign key to Customers.CustomerID.

To add the Order entity class

  • Type or paste the following code after the Customer class:

    VB
  • <Table(Name:="Orders")> _
    Public Class Order
        Private _OrderID As Integer
        Private _CustomerID As String
        Private _Customers As EntityRef(Of Customer)
    
        Public Sub New()
            Me._Customers = New EntityRef(Of Customer)()
        End Sub
    
        <Column(Storage:="_OrderID", DbType:="Int NOT NULL IDENTITY", _
            IsPrimaryKey:=True, IsDBGenerated:=True)> _
        Public ReadOnly Property OrderID() As Integer
            Get
                Return Me._OrderID
            End Get
        End Property
        ' No need to specify a setter because IsDBGenerated is true.
    
        <Column(Storage:="_CustomerID", DbType:="NChar(5)")> _
        Public Property CustomerID() As String
            Get
                Return Me._CustomerID
            End Get
            Set(ByVal value As String)
                Me._CustomerID = value
            End Set
        End Property
    
        <Association(Storage:="_Customers", ThisKey:="CustomerID")> _
        Public Property Customers() As Customer
            Get
                Return Me._Customers.Entity
            End Get
            Set(ByVal value As Customer)
                Me._Customers.Entity = value
            End Set
        End Property
    End Class
    

Annotating the Customer Class

In this step, you annotate the Customer class to indicate its relationship to the Order class. (This addition is not strictly necessary, because defining the relationship in either direction is sufficient to create the link. But adding this annotation does enable you to easily navigate objects in either direction.)

To annotate the Customer class

  • Type or paste the following code into the Customer class:

    VB
  • Private _Orders As EntitySet(Of Order)
    
    Public Sub New()
        Me._Orders = New EntitySet(Of Order)()
    End Sub
    
    <Association(Storage:="_Orders", OtherKey:="CustomerID")> _
    Public Property Orders() As EntitySet(Of Order)
        Get
            Return Me._Orders
        End Get
        Set(ByVal value As EntitySet(Of Order))
            Me._Orders.Assign(value)
        End Set
    End Property
    

Creating and Running a Query across the Customer-Order Relationship

You can now access Order objects directly from the Customer objects, or in the opposite direction. You do not need an explicit join between customers and orders.

To access Order objects by using Customer objects

  1. Modify the Sub Main method by typing or pasting the following code into the method:

    VB
  1. ' Query for customers who have no orders.
    Dim custQuery = _
        From cust In Customers _
        Where Not cust.Orders.Any() _
        Select cust
    
    Dim msg As String = "", title As String = _
        "Customers With No Orders", response As MsgBoxResult, _
        style As MsgBoxStyle = MsgBoxStyle.Information
    
    For Each custObj In custQuery
        msg &= String.Format(custObj.CustomerID & vbCrLf)
    Next
    response = MsgBox(msg, style, title)
    
  2. Press F5 to debug your application.

    Two names appear in the message box, and the Console window shows the generated SQL code.

  3. Close the message box to stop debugging.

Creating a Strongly Typed View of Your Database

It is much easier to start with a strongly typed view of your database. By strongly typing the DataContext object, you do not need calls to GetTable. You can use strongly typed tables in all your queries when you use the strongly typed DataContext object.

In the following steps, you will create Customers as a strongly typed table that maps to the Customers table in the database.

To strongly type the DataContext object

  1. Add the following code above the Customer class declaration.

    VB
    Public Class Northwind
        Inherits DataContext
        ' Table(Of T) abstracts database details  per
        ' table/data type.
        Public Customers As Table(Of Customer)
        Public Orders As Table(Of Order)
    
        Public Sub New(ByVal connection As String)
            MyBase.New(connection)
        End Sub
    End Class
    
  2. Modify Sub Main to use the strongly typed DataContext as follows:

    VB
    ' Use a connection string.
    Dim db As New Northwind _
        ("C:\linqtest\northwnd.mdf")
    
    ' Query for customers from Seattle.
    Dim custs = _
        From cust In db.Customers _
        Where cust.City = "Seattle" _
        Select cust
    
    For Each custObj In custs
        Console.WriteLine("ID=" & custObj.CustomerID)
    Next
    
    ' Freeze the console window.
    Console.ReadLine()
    
  3. Press F5 to debug your application.

    The Console window output is:

    ID=WHITC

  4. Press Enter in the Console window to close the application.

  5. On the File menu, click Save All if you want to save this application.

Next Steps

The next walkthrough (Walkthrough: Manipulating Data (Visual Basic)) demonstrates how to manipulate data. That walkthrough does not require that you save the two walkthroughs in this series that you have already completed.

See also

Walkthrough: Manipulating Data (Visual Basic)

This walkthrough provides a fundamental end-to-end LINQ to SQL scenario for adding, modifying, and deleting data in a database. You will use a copy of the sample Northwind database to add a customer, change the name of a customer, and delete an order.

Note

Your computer might show different names or locations for some of the Visual Studio user interface elements in the following instructions. The Visual Studio edition that you have and the settings that you use determine these elements. For more information, see Personalizing the IDE.

This walkthrough was written by using Visual Basic Development Settings.

Prerequisites

This walkthrough requires the following:

  • This walkthrough uses a dedicated folder ("c:\linqtest2") to hold files. Create this folder before you begin the walkthrough.

  • The Northwind sample database.

    If you do not have this database on your development computer, you can download it from the Microsoft download site. For instructions, see Downloading Sample Databases. After you have downloaded the database, copy the northwnd.mdf file to the c:\linqtest2 folder.

  • A Visual Basic code file generated from the Northwind database.

    You can generate this file by using either the Object Relational Designer or the SQLMetal tool. This walkthrough was written by using the SQLMetal tool with the following command line:

    sqlmetal /code:"c:\linqtest2\northwind.vb" /language:vb "C:\linqtest2\northwnd.mdf" /pluralize

    For more information, see SqlMetal.exe (Code Generation Tool).

Overview

This walkthrough consists of six main tasks:

  • Creating the LINQ to SQL solution in Visual Studio.

  • Adding the database code file to the project.

  • Creating a new customer object.

  • Modifying the contact name of a customer.

  • Deleting an order.

  • Submitting these changes to the Northwind database.

Creating a LINQ to SQL Solution

In this first task, you create a Visual Studio solution that contains the necessary references to build and run a LINQ to SQL project.

To create a LINQ to SQL solution

  1. On the Visual Studio File menu, click New Project.

  2. In the Project types pane in the New Project dialog box, click Visual Basic.

  3. In the Templates pane, click Console Application.

  4. In the Name box, type LinqDataManipulationApp.

  5. Click OK.

Adding LINQ References and Directives

This walkthrough uses assemblies that might not be installed by default in your project. If System.Data.Linq is not listed as a reference in your project (click Show All Files in Solution Explorer and expand the References node), add it, as explained in the following steps.

To add System.Data.Linq

  1. In Solution Explorer, right-click References, and then click Add Reference.

  2. In the Add Reference dialog box, click .NET, click the System.Data.Linq assembly, and then click OK.

    The assembly is added to the project.

  3. In the code editor, add the following directives above Module1:

    VB
    Imports System.Data.Linq
    Imports System.Data.Linq.Mapping
    

Adding the Northwind Code File to the Project

These steps assume that you have used the SQLMetal tool to generate a code file from the Northwind sample database. For more information, see the Prerequisites section earlier in this walkthrough.

To add the northwind code file to the project

  1. On the Project menu, click Add Existing Item.

  2. In the Add Existing Item dialog box, navigate to c:\linqtest2\northwind.vb, and then click Add.

    The northwind.vb file is added to the project.

Setting Up the Database Connection

First, test your connection to the database. Note especially that the name of the database, Northwnd, has no i character. If you generate errors in the next steps, review the northwind.vb file to determine how the Northwind partial class is spelled.

To set up and test the database connection

  1. Type or paste the following code into Sub Main:

    VB
    ' Use a connection string, but connect to
    '     the temporary copy of the database.
    Dim db As New Northwnd _
        ("C:\linqtest2\northwnd.mdf")
    
    ' Keep the console window open after activity stops.
    Console.ReadLine()
    
  2. Press F5 to test the application at this point.

    A Console window opens.

    Close the application by pressing Enter in the Console window, or by clicking Stop Debugging on the Visual Studio Debug menu.

Creating a New Entity

Creating a new entity is straightforward. You can create objects (such as Customer) by using the New keyword.

In this and the following sections, you are making changes only to the local cache. No changes are sent to the database until you call SubmitChanges toward the end of this walkthrough.

To add a new Customer entity object

  1. Create a new Customer by adding the following code before Console.ReadLine in Sub Main:

    VB
    ' Create the new Customer object.
    Dim newCust As New Customer()
    newCust.CompanyName = "AdventureWorks Cafe"
    newCust.CustomerID = "A3VCA"
    
    ' Add the customer to the Customers table.
    db.Customers.InsertOnSubmit(newCust)
    
    Console.WriteLine("Customers matching CA before insert:")
    
    Dim custQuery = _
        From cust In db.Customers _
        Where cust.CustomerID.Contains("CA") _
        Select cust
    
    For Each cust In custQuery
        Console.WriteLine("Customer ID: " & cust.CustomerID)
    Next
    
  2. Press F5 to debug the solution.

    The results that are shown in the console window are as follows:

    Customers matching CA before insert:

    Customer ID: CACTU

    Customer ID: RICAR

    Note that the new row does not appear in the results. The new data has not yet been submitted to the database.

  3. Press Enter in the Console window to stop debugging.

Updating an Entity

In the following steps, you will retrieve a Customer object and modify one of its properties.

To change the name of a Customer

  • Add the following code above Console.ReadLine():

    VB
    Dim existingCust = _
        (From cust In db.Customers _
        Where cust.CustomerID = "ALFKI" _
        Select cust).First()
    
    ' Change the contact name of the customer.
    existingCust.ContactName = "New Contact"
    

Deleting an Entity

Using the same customer object, you can delete the first order.

The following code demonstrates how to sever relationships between rows, and how to delete a row from the database.

To delete a row

  • Add the following code just above Console.ReadLine():

    VB
    ' Access the first element in the Orders collection.
    Dim ord0 As Order = existingCust.Orders(0)
    
    ' Access the first element in the OrderDetails collection.
    Dim detail0 As OrderDetail = ord0.OrderDetails(0)
    
    ' Display the order to be deleted.
    Console.WriteLine _
        (vbCrLf & "The Order Detail to be deleted is: OrderID = " _
        & detail0.OrderID)
    
    ' Mark the Order Detail row for deletion from the database.
    db.OrderDetails.DeleteOnSubmit(detail0)
    

Submitting Changes to the Database

The final step required for creating, updating, and deleting objects is to actually submit the changes to the database. Without this step, your changes are only local and will not appear in query results.

To submit changes to the database

  1. Insert the following code just above Console.ReadLine:

    VB
    db.SubmitChanges()
    
  2. Insert the following code (after SubmitChanges) to show the before and after effects of submitting the changes:

    VB
    Console.WriteLine(vbCrLf & "Customers matching CA after update:")
    Dim finalQuery = _
        From cust In db.Customers _
        Where cust.CustomerID.Contains("CA") _
        Select cust
    
    For Each cust In finalQuery
        Console.WriteLine("Customer ID: " & cust.CustomerID)
    Next
    
  3. Press F5 to debug the solution.

    The console window appears as follows:

    Customers matching CA before insert:
    Customer ID: CACTU  
    Customer ID: RICAR  
    
    The Order Detail to be deleted is: OrderID = 10643  
    
    Customers matching CA after update:  
    Customer ID: A3VCA  
    Customer ID: CACTU  
    Customer ID: RICAR  
    
  4. Press Enter in the Console window to stop debugging.

Note

After you have added the new customer by submitting the changes, you cannot run this solution again as is, because the same customer ID cannot be inserted twice. To run the solution again, change the value of the customer ID to be added.

See also

Walkthrough: Using Only Stored Procedures (Visual Basic)

This walkthrough provides a basic end-to-end LINQ to SQL scenario for accessing data by using stored procedures only. This approach is often used by database administrators to limit how the datastore is accessed.

Note

You can also use stored procedures in LINQ to SQL applications to override default behavior, especially for Create, Update, and Delete processes. For more information, see Customizing Insert, Update, and Delete Operations.

For purposes of this walkthrough, you will use two methods that have been mapped to stored procedures in the Northwind sample database: CustOrdersDetail and CustOrderHist. The mapping occurs when you run the SqlMetal command-line tool to generate a Visual Basic file. For more information, see the Prerequisites section later in this walkthrough.

This walkthrough does not rely on the Object Relational Designer. Developers using Visual Studio can also use the O/R Designer to implement stored procedure functionality. See LINQ to SQL Tools in Visual Studio.

Note

Your computer might show different names or locations for some of the Visual Studio user interface elements in the following instructions. The Visual Studio edition that you have and the settings that you use determine these elements. For more information, see Personalizing the IDE.

This walkthrough was written by using Visual Basic Development Settings.

Prerequisites

This walkthrough requires the following:

  • This walkthrough uses a dedicated folder ("c:\linqtest3") to hold files. Create this folder before you begin the walkthrough.

  • The Northwind sample database.

    If you do not have this database on your development computer, you can download it from the Microsoft download site. For instructions, see Downloading Sample Databases. After you have downloaded the database, copy the northwnd.mdf file to the c:\linqtest3 folder.

  • A Visual Basic code file generated from the Northwind database.

    This walkthrough was written by using the SqlMetal tool with the following command line:

    sqlmetal /code:"c:\linqtest3\northwind.vb" /language:vb "c:\linqtest3\northwnd.mdf" /sprocs /functions /pluralize

    For more information, see SqlMetal.exe (Code Generation Tool).

Overview

This walkthrough consists of six main tasks:

  • Setting up the LINQ to SQL solution in Visual Studio.

  • Adding the System.Data.Linq assembly to the project.

  • Adding the database code file to the project.

  • Creating a connection to the database.

  • Setting up the user interface.

  • Running and testing the application.

Creating a LINQ to SQL Solution

In this first task, you create a Visual Studio solution that contains the necessary references to build and run a LINQ to SQL project.

To create a LINQ to SQL solution

  1. On the Visual Studio File menu, click New Project.

  2. In the Project types pane in the New Project dialog box, expand Visual Basic, and then click Windows.

  3. In the Templates pane, click Windows Forms Application.

  4. In the Name box, type SprocOnlyApp.

  5. Click OK.

    The Windows Forms Designer opens.

Adding the LINQ to SQL Assembly Reference

The LINQ to SQL assembly is not included in the standard Windows Forms Application template. You will have to add the assembly yourself, as explained in the following steps:

To add System.Data.Linq.dll

  1. In Solution Explorer, click Show All Files.

  2. In Solution Explorer, right-click References, and then click Add Reference.

  3. In the Add Reference dialog box, click .NET, click the System.Data.Linq assembly, and then click OK.

    The assembly is added to the project.

Adding the Northwind Code File to the Project

This step assumes that you have used the SqlMetal tool to generate a code file from the Northwind sample database. For more information, see the Prerequisites section earlier in this walkthrough.

To add the northwind code file to the project

  1. On the Project menu, click Add Existing Item.

  2. In the Add Existing Item dialog box, move to c:\linqtest3\northwind.vb, and then click Add.

    The northwind.vb file is added to the project.

Creating a Database Connection

In this step, you define the connection to the Northwind sample database. This walkthrough uses "c:\linqtest3\northwnd.mdf" as the path.

To create the database connection

  1. In Solution Explorer, right-click Form1.vb, and then click View Code.

    Class Form1 appears in the code editor.

  2. Type the following code into the Form1 code block:

    VB
    Dim db As New Northwnd("c:\linqtest3\northwnd.mdf")
    

Setting up the User Interface

In this task you create an interface so that users can execute stored procedures to access data in the database. In the application that you are developing with this walkthrough, users can access data in the database only by using the stored procedures embedded in the application.

To set up the user interface

  1. Return to the Windows Forms Designer (Form1.vb[Design]).

  2. On the View menu, click Toolbox.

    The toolbox opens.

    Note

    Click the AutoHide pushpin to keep the toolbox open while you perform the remaining steps in this section.

  3. Drag two buttons, two text boxes, and two labels from the toolbox onto Form1.

    Arrange the controls as in the accompanying illustration. Expand Form1 so that the controls fit easily.

  4. Right-click Label1, and then click Properties.

  5. Change the Text property from Label1 to Enter OrderID:.

  6. In the same way for Label2, change the Text property from Label2 to Enter CustomerID:.

  7. In the same way, change the Text property for Button1 to Order Details.

  8. Change the Text property for Button2 to Order History.

    Widen the button controls so that all the text is visible.

To handle button clicks

  1. Double-click Order Details on Form1 to create the Button1 event handler and open the code editor.

  2. Type the following code into the Button1 handler:

    VB
    ' Declare a variable to hold the contents of
    ' TextBox1 as an argument for the stored
    ' procedure.
    Dim parm As String = TextBox1.Text
    
    ' Declare a variable to hold the results returned
    ' by the stored procedure.
    Dim custQuery = db.CustOrdersDetail(CInt(parm))
    
    ' Clear the message box of previous results.
    Dim msg As String = ""
    Dim response As MsgBoxResult
    
    ' Execute the stored procedure and store the results.
    For Each custOrdersDetail As CustOrdersDetailResult In custQuery
        msg &= custOrdersDetail.ProductName & vbCrLf
    Next
    
    ' Display the results.
    If msg = "" Then
        msg = "No results."
    End If
    response = MsgBox(msg)
    
    ' Clear the variables before continuing.
    parm = ""
    TextBox1.Text = ""
    
  3. Now double-click Button2 on Form1 to create the Button2 event handler and open the code editor.

  4. Type the following code into the Button2 handler:

    VB
    ' Comments in the code for Button2 are the same
    ' as for Button1.
    Dim parm As String = TextBox2.Text
    
    Dim custQuery2 = db.CustOrderHist(parm)
    Dim msg As String = ""
    Dim response As MsgBoxResult
    
    For Each custOrdHist As CustOrderHistResult In custQuery2
        msg &= custOrdHist.ProductName & vbCrLf
    Next
    
    If msg = "" Then
        msg = "No results."
    End If
    
    response = MsgBox(msg)
    parm = ""
    TextBox2.Text = ""
    

Testing the Application

Now it is time to test your application. Note that your contact with the datastore is limited to whatever actions the two stored procedures can take. Those actions are to return the products included in the order for any OrderID that you enter, or to return a history of products ordered for any CustomerID that you enter.

To test the application

  1. Press F5 to start debugging.

    Form1 appears.

  2. In the Enter OrderID box, type 10249 and then click Order Details.

    A message box lists the products included in order 10249.

    Click OK to close the message box.

  3. In the Enter CustomerID box, type ALFKI, and then click Order History.

    A message box lists the order history for customer ALFKI.

    Click OK to close the message box.

  4. In the Enter OrderID box, type 123, and then click Order Details.

    A message box displays "No results."

    Click OK to close the message box.

  5. On the Debug menu, click Stop debugging.

    The debug session closes.

  6. If you have finished experimenting, you can click Close Project on the File menu, and save your project when you are prompted.

Next Steps

You can enhance this project by making some changes. For example, you could list available stored procedures in a list box and have the user select which procedures to execute. You could also stream the output of the reports to a text file.

See also

Walkthrough: Simple Object Model and Query (C#)

This walkthrough provides a fundamental end-to-end LINQ to SQL scenario with minimal complexities. You will create an entity class that models the Customers table in the sample Northwind database. You will then create a simple query to list customers who are located in London.

This walkthrough is code-oriented by design to help show LINQ to SQL concepts. Normally, you would use the Object Relational Designer to create your object model.

Note

Your computer might show different names or locations for some of the Visual Studio user interface elements in the following instructions. The Visual Studio edition that you have and the settings that you use determine these elements. For more information, see Personalizing the IDE.

This walkthrough was written by using Visual C# Development Settings.

Prerequisites

  • This walkthrough uses a dedicated folder ("c:\linqtest5") to hold files. Create this folder before you begin the walkthrough.

  • This walkthrough requires the Northwind sample database. If you do not have this database on your development computer, you can download it from the Microsoft download site. For instructions, see Downloading Sample Databases. After you have downloaded the database, copy the file to the c:\linqtest5 folder.

Overview

This walkthrough consists of six main tasks:

  • Creating a LINQ to SQL solution in Visual Studio.

  • Mapping a class to a database table.

  • Designating properties on the class to represent database columns.

  • Specifying the connection to the Northwind database.

  • Creating a simple query to run against the database.

  • Executing the query and observing the results.

Creating a LINQ to SQL Solution

In this first task, you create a Visual Studio solution that contains the necessary references to build and run a LINQ to SQL project.

To create a LINQ to SQL solution

  1. On the Visual Studio File menu, point to New, and then click Project.

  2. In the Project types pane of the New Project dialog box, click Visual C#.

  3. In the Templates pane, click Console Application.

  4. In the Name box, type LinqConsoleApp.

  5. In the Location box, verify where you want to store your project files.

  6. Click OK.

Adding LINQ References and Directives

This walkthrough uses assemblies that might not be installed by default in your project. If System.Data.Linq is not listed as a reference in your project (expand the References node in Solution Explorer), add it, as explained in the following steps.

To add System.Data.Linq

  1. In Solution Explorer, right-click References, and then click Add Reference.

  2. In the Add Reference dialog box, click .NET, click the System.Data.Linq assembly, and then click OK.

    The assembly is added to the project.

  3. Add the following directives at the top of Program.cs:

    C#
    using System.Data.Linq;
    using System.Data.Linq.Mapping;
    

Mapping a Class to a Database Table

In this step, you create a class and map it to a database table. Such a class is termed an entity class. Note that the mapping is accomplished by just adding the TableAttribute attribute. The Name property specifies the name of the table in the database.

To create an entity class and map it to a database table

  • Type or paste the following code into Program.cs immediately above the Program class declaration:

    C#
    [Table(Name = "Customers")]
    public class Customer
    {
    }
    

Designating Properties on the Class to Represent Database Columns

In this step, you accomplish several tasks.

  • You use the ColumnAttribute attribute to designate CustomerID and City properties on the entity class as representing columns in the database table.

  • You designate the CustomerID property as representing a primary key column in the database.

  • You designate _CustomerID and _City fields for private storage. LINQ to SQL can then store and retrieve values directly, instead of using public accessors that might include business logic.

To represent characteristics of two database columns

  • Type or paste the following code into Program.cs inside the curly braces for the Customer class.

    C#
    private string _CustomerID;
    [Column(IsPrimaryKey=true, Storage="_CustomerID")]
    public string CustomerID
    {
        get
        {
            return this._CustomerID;
        }
        set
        {
            this._CustomerID = value;
        }
        
    }
    
    private string _City;
    [Column(Storage="_City")]
    public string City
    {
        get
        {
            return this._City;
        }
        set
        {
            this._City=value;
        }
    }
    

Specifying the Connection to the Northwind Database

In this step you use a DataContext object to establish a connection between your code-based data structures and the database itself. The DataContext is the main channel through which you retrieve objects from the database and submit changes.

You also declare a Table<Customer> to act as the logical, typed table for your queries against the Customers table in the database. You will create and execute these queries in later steps.

To specify the database connection

  • Type or paste the following code into the Main method.

    Note that the northwnd.mdf file is assumed to be in the linqtest5 folder. For more information, see the Prerequisites section earlier in this walkthrough.

    C#
    // Use a connection string.
    DataContext db = new DataContext
        (@"c:\linqtest5\northwnd.mdf");
    
    // Get a typed table to run queries.
    Table<Customer> Customers = db.GetTable<Customer>();
    

Creating a Simple Query

In this step, you create a query to find which customers in the database Customers table are located in London. The query code in this step just describes the query. It does not execute it. This approach is known as deferred execution. For more information, see Introduction to LINQ Queries (C#).

You will also produce a log output to show the SQL commands that LINQ to SQL generates. This logging feature (which uses Log) is helpful in debugging, and in determining that the commands being sent to the database accurately represent your query.

To create a simple query

  • Type or paste the following code into the Main method after the Table<Customer> declaration.

    C#
    // Attach the log to show generated SQL.
    db.Log = Console.Out;
    
    // Query for customers in London.
    IQueryable<Customer> custQuery =
        from cust in Customers
        where cust.City == "London"
        select cust;
    

Executing the Query

In this step, you actually execute the query. The query expressions you created in the previous steps are not evaluated until the results are needed. When you begin the foreach iteration, a SQL command is executed against the database and objects are materialized.

To execute the query

  1. Type or paste the following code at the end of the Main method (after the query description).

    C#
    foreach (Customer cust in custQuery)
    {
        Console.WriteLine("ID={0}, City={1}", cust.CustomerID,
            cust.City);
    }
    
    // Prevent console window from closing.
    Console.ReadLine();
    
  2. Press F5 to debug the application.

    Note

    If your application generates a run-time error, see the Troubleshooting section of Learning by Walkthroughs.

    The query results in the console window should appear as follows:

    ID=AROUT, City=London

    ID=BSBEV, City=London

    ID=CONSH, City=London

    ID=EASTC, City=London

    ID=NORTS, City=London

    ID=SEVES, City=London

  3. Press Enter in the console window to close the application.

Next Steps

The Walkthrough: Querying Across Relationships (C#) topic continues where this walkthrough ends. The Query Across Relationships walkthrough demonstrates how LINQ to SQL can query across tables, similar to joins in a relational database.

If you want to do the Query Across Relationships walkthrough, make sure to save the solution for the walkthrough you have just completed, which is a prerequisite.

See also

Walkthrough: Querying Across Relationships (C#)

This walkthrough demonstrates the use of LINQ to SQL associations to represent foreign-key relationships in the database.

Note

Your computer might show different names or locations for some of the Visual Studio user interface elements in the following instructions. The Visual Studio edition that you have and the settings that you use determine these elements. For more information, see Personalizing the IDE.

This walkthrough was written by using Visual C# Development Settings.

Prerequisites

You must have completed Walkthrough: Simple Object Model and Query (C#). This walkthrough builds on that one, including the presence of the northwnd.mdf file in c:\linqtest5.

Overview

This walkthrough consists of three main tasks:

  • Adding an entity class to represent the Orders table in the sample Northwind database.

  • Supplementing annotations to the Customer class to enhance the relationship between the Customer and Order classes.

  • Creating and running a query to test obtaining Order information by using the Customer class.

Mapping Relationships Across Tables

After the Customer class definition, create the Order entity class definition. The Association attribute in the following code indicates that Order.Customer relates as a foreign key to Customer.CustomerID.

To add the Order entity class

  • Type or paste the following code after the Customer class:

    C#
    [Table(Name = "Orders")]
    public class Order
    {
        private int _OrderID = 0;
        private string _CustomerID;
        private EntityRef<Customer> _Customer;
        public Order() { this._Customer = new EntityRef<Customer>(); }
    
        [Column(Storage = "_OrderID", DbType = "Int NOT NULL IDENTITY",
        IsPrimaryKey = true, IsDbGenerated = true)]
        public int OrderID
        {
            get { return this._OrderID; }
            // No need to specify a setter because IsDBGenerated is
            // true.
        }
    
        [Column(Storage = "_CustomerID", DbType = "NChar(5)")]
        public string CustomerID
        {
            get { return this._CustomerID; }
            set { this._CustomerID = value; }
        }
    
        [Association(Storage = "_Customer", ThisKey = "CustomerID")]
        public Customer Customer
        {
            get { return this._Customer.Entity; }
            set { this._Customer.Entity = value; }
        }
    }
    

Annotating the Customer Class

In this step, you annotate the Customer class to indicate its relationship to the Order class. (This addition is not strictly necessary, because defining the relationship in either direction is sufficient to create the link. But adding this annotation does enable you to easily navigate objects in either direction.)

To annotate the Customer class

  • Type or paste the following code into the Customer class:

    C#
    private EntitySet<Order> _Orders;
    public Customer()
    {
        this._Orders = new EntitySet<Order>();
    }
    
    [Association(Storage = "_Orders", OtherKey = "CustomerID")]
    public EntitySet<Order> Orders
    {
        get { return this._Orders; }
        set { this._Orders.Assign(value); }
    }
    

Creating and Running a Query Across the Customer-Order Relationship

You can now access Order objects directly from the Customer objects, or in the opposite order. You do not need an explicit join between customers and orders.

To access Order objects by using Customer objects

  1. Modify the Main method by typing or pasting the following code into the method:

    C#
    // Query for customers who have placed orders.
    var custQuery = 
        from cust in Customers
        where cust.Orders.Any()
        select cust;
    
    foreach (var custObj in custQuery)
    {
        Console.WriteLine("ID={0}, Qty={1}", custObj.CustomerID,
            custObj.Orders.Count);
    }
    
  2. Press F5 to debug your application.

    Note

    You can eliminate the SQL code in the Console window by commenting out db.Log = Console.Out;.

  3. Press Enter in the Console window to stop debugging.

Creating a Strongly Typed View of Your Database

It is much easier to start with a strongly typed view of your database. By strongly typing the DataContext object, you do not need calls to GetTable. You can use strongly typed tables in all your queries when you use the strongly typed DataContext object.

In the following steps, you will create Customers as a strongly typed table that maps to the Customers table in the database.

To strongly type the DataContext object

  1. Add the following code above the Customer class declaration.

    C#
    public class Northwind : DataContext
    {
        // Table<T> abstracts database details per table/data type.
        public Table<Customer> Customers;
        public Table<Order> Orders;
    
        public Northwind(string connection) : base(connection) { }
    }
    
  2. Modify the Main method to use the strongly typed DataContext as follows:

    C#
    // Use a connection string.
    Northwind db = new Northwind(@"C:\linqtest5\northwnd.mdf");
    
    // Query for customers from Seattle. 
    var custQuery =
        from cust in db.Customers
        where cust.City == "Seattle"
        select cust;
    
    foreach (var custObj in custQuery)
    {
        Console.WriteLine("ID={0}", custObj.CustomerID);
    }
    // Freeze the console window.
    Console.ReadLine();
    
  3. Press F5 to debug your application.

    The Console window output is:

    ID=WHITC

  4. Press Enter in the console window to stop debugging.

Next Steps

The next walkthrough (Walkthrough: Manipulating Data (C#)) demonstrates how to manipulate data. That walkthrough does not require that you save the two walkthroughs in this series that you have already completed.

See also

Walkthrough: Manipulating Data (C#)

This walkthrough provides a fundamental end-to-end LINQ to SQL scenario for adding, modifying, and deleting data in a database. You will use a copy of the sample Northwind database to add a customer, change the name of a customer, and delete an order.

Note

Your computer might show different names or locations for some of the Visual Studio user interface elements in the following instructions. The Visual Studio edition that you have and the settings that you use determine these elements. For more information, see Personalizing the IDE.

This walkthrough was written by using Visual C# Development Settings.

Prerequisites

This walkthrough requires the following:

  • This walkthrough uses a dedicated folder ("c:\linqtest6") to hold files. Create this folder before you begin the walkthrough.

  • The Northwind sample database.

    If you do not have this database on your development computer, you can download it from the Microsoft download site. For instructions, see Downloading Sample Databases. After you have downloaded the database, copy the northwnd.mdf file to the c:\linqtest6 folder.

  • A C# code file generated from the Northwind database.

    You can generate this file by using either the Object Relational Designer or the SQLMetal tool. This walkthrough was written by using the SQLMetal tool with the following command line:

    sqlmetal /code:"c:\linqtest6\northwind.cs" /language:csharp "C:\linqtest6\northwnd.mdf" /pluralize

    For more information, see SqlMetal.exe (Code Generation Tool).

Overview

This walkthrough consists of six main tasks:

  • Creating the LINQ to SQL solution in Visual Studio.

  • Adding the database code file to the project.

  • Creating a new customer object.

  • Modifying the contact name of a customer.

  • Deleting an order.

  • Submitting these changes to the Northwind database.

Creating a LINQ to SQL Solution

In this first task, you create a Visual Studio solution that contains the necessary references to build and run a LINQ to SQL project.

To create a LINQ to SQL solution

  1. On the Visual Studio File menu, point to New, and then click Project.

  2. In the Project types pane in the New Project dialog box, click Visual C#.

  3. In the Templates pane, click Console Application.

  4. In the Name box, type LinqDataManipulationApp.

  5. In the Location box, verify where you want to store your project files.

  6. Click OK.

Adding LINQ References and Directives

This walkthrough uses assemblies that might not be installed by default in your project. If System.Data.Linq is not listed as a reference in your project, add it, as explained in the following steps:

To add System.Data.Linq

  1. In Solution Explorer, right-click References, and then click Add Reference.

  2. In the Add Reference dialog box, click .NET, click the System.Data.Linq assembly, and then click OK.

    The assembly is added to the project.

  3. Add the following directives at the top of Program.cs:

    C#
    using System.Data.Linq;
    using System.Data.Linq.Mapping;
    

Adding the Northwind Code File to the Project

These steps assume that you have used the SQLMetal tool to generate a code file from the Northwind sample database. For more information, see the Prerequisites section earlier in this walkthrough.

To add the northwind code file to the project

  1. On the Project menu, click Add Existing Item.

  2. In the Add Existing Item dialog box, navigate to c:\linqtest6\northwind.cs, and then click Add.

    The northwind.cs file is added to the project.

Setting Up the Database Connection

First, test your connection to the database. Note especially that the name of the database, Northwnd, has no i character. If you generate errors in the next steps, review the northwind.cs file to determine how the Northwind partial class is spelled.

To set up and test the database connection

  1. Type or paste the following code into the Main method in the Program class:

    C#
    // Use the following connection string.
    Northwnd db = new Northwnd(@"c:\linqtest6\northwnd.mdf");
    
    // Keep the console window open after activity stops.
    Console.ReadLine();
    
  2. Press F5 to test the application at this point.

    A Console window opens.

    You can close the application by pressing Enter in the Console window, or by clicking Stop Debugging on the Visual Studio Debug menu.

Creating a New Entity

Creating a new entity is straightforward. You can create objects (such as Customer) by using the new keyword.

In this and the following sections, you are making changes only to the local cache. No changes are sent to the database until you call SubmitChanges toward the end of this walkthrough.

To add a new Customer entity object

  1. Create a new Customer by adding the following code before Console.ReadLine(); in the Main method:

    C#
    // Create the new Customer object.
    Customer newCust = new Customer();
    newCust.CompanyName = "AdventureWorks Cafe";
    newCust.CustomerID = "ADVCA";
    
    // Add the customer to the Customers table.
    db.Customers.InsertOnSubmit(newCust);
    
    Console.WriteLine("\nCustomers matching CA before insert");
    
    foreach (var c in db.Customers.Where(cust => cust.CustomerID.Contains("CA")))
    {
        Console.WriteLine("{0}, {1}, {2}",
            c.CustomerID, c.CompanyName, c.Orders.Count);
    }
    
  2. Press F5 to debug the solution.

  3. Press Enter in the Console window to stop debugging and continue the walkthrough.

Updating an Entity

In the following steps, you will retrieve a Customer object and modify one of its properties.

To change the name of a Customer

  • Add the following code above Console.ReadLine();:

    C#
    // Query for specific customer.
    // First() returns one object rather than a collection.
    var existingCust =
        (from c in db.Customers
         where c.CustomerID == "ALFKI"
         select c)
        .First();
    
    // Change the contact name of the customer.
    existingCust.ContactName = "New Contact";
    

Deleting an Entity

Using the same customer object, you can delete the first order.

The following code demonstrates how to sever relationships between rows, and how to delete a row from the database. Add the following code before Console.ReadLine to see how objects can be deleted:

To delete a row

  • Add the following code just above Console.ReadLine();:

    C#
    // Access the first element in the Orders collection.
    Order ord0 = existingCust.Orders[0];
    
    // Access the first element in the OrderDetails collection.
    OrderDetail detail0 = ord0.OrderDetails[0];
    
    // Display the order to be deleted.
    Console.WriteLine
        ("The Order Detail to be deleted is: OrderID = {0}, ProductID = {1}",
        detail0.OrderID, detail0.ProductID);
    
    // Mark the Order Detail row for deletion from the database.
    db.OrderDetails.DeleteOnSubmit(detail0);
    

Submitting Changes to the Database

The final step required for creating, updating, and deleting objects is to actually submit the changes to the database. Without this step, your changes are only local and will not appear in query results.

To submit changes to the database

  1. Insert the following code just above Console.ReadLine:

    C#
    db.SubmitChanges();
    
  2. Insert the following code (after SubmitChanges) to show the before and after effects of submitting the changes:

    C#
    Console.WriteLine("\nCustomers matching CA after update");
    foreach (var c in db.Customers.Where(cust =>
        cust.CustomerID.Contains("CA")))
    {
        Console.WriteLine("{0}, {1}, {2}",
            c.CustomerID, c.CompanyName, c.Orders.Count);
    }
    
  3. Press F5 to debug the solution.

  4. Press Enter in the Console window to close the application.

Note

After you have added the new customer by submitting the changes, you cannot execute this solution again as is. To execute the solution again, change the name of the customer and customer ID to be added.
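
Another option, sketched below, is to guard the insert so that it only runs when the customer ID is not already in the table. This is not part of the original walkthrough; it assumes the same Northwnd context and Customer class used earlier, and it requires a using directive for System.Linq.

C#
// Sketch only (not part of the walkthrough): skip the insert when a
// customer with this ID already exists. Requires: using System.Linq;
if (!db.Customers.Any(c => c.CustomerID == "ADVCA"))
{
    Customer newCust = new Customer();
    newCust.CompanyName = "AdventureWorks Cafe";
    newCust.CustomerID = "ADVCA";
    db.Customers.InsertOnSubmit(newCust);
}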

See also

Walkthrough: Using Only Stored Procedures (C#)

This walkthrough provides a basic end-to-end LINQ to SQL scenario for accessing data by executing stored procedures only. This approach is often used by database administrators to limit how the datastore is accessed.

Note

You can also use stored procedures in LINQ to SQL applications to override default behavior, especially for Create, Update, and Delete processes. For more information, see Customizing Insert, Update, and Delete Operations.

For purposes of this walkthrough, you will use two methods that have been mapped to stored procedures in the Northwind sample database: CustOrdersDetail and CustOrderHist. The mapping occurs when you run the SqlMetal command-line tool to generate a C# file. For more information, see the Prerequisites section later in this walkthrough.

This walkthrough does not rely on the Object Relational Designer. Developers using Visual Studio can also use the O/R Designer to implement stored procedure functionality. See LINQ to SQL Tools in Visual Studio.
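
For orientation, the stored-procedure wrappers that SqlMetal emits are ordinary methods on the generated Northwnd class (which derives from DataContext), decorated with the FunctionAttribute and returning ISingleResult<T>. The following is a simplified sketch of what the generated CustOrderHist wrapper typically looks like; the exact attribute values and types in your northwind.cs file may differ.

C#
// Simplified sketch of a SqlMetal-generated stored-procedure wrapper.
// It lives inside the generated DataContext-derived class and requires
// System.Data.Linq, System.Data.Linq.Mapping, and System.Reflection.
[Function(Name = "dbo.CustOrderHist")]
public ISingleResult<CustOrderHistResult> CustOrderHist(
    [Parameter(Name = "CustomerID", DbType = "NChar(5)")] string customerID)
{
    // ExecuteMethodCall runs the stored procedure and wraps the
    // returned rows as CustOrderHistResult objects.
    IExecuteResult result = this.ExecuteMethodCall(
        this, (MethodInfo)MethodBase.GetCurrentMethod(), customerID);
    return (ISingleResult<CustOrderHistResult>)result.ReturnValue;
}

Calling db.CustOrderHist("ALFKI") therefore executes the stored procedure and returns the result objects that the button handlers enumerate later in this walkthrough.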

Note

Your computer might show different names or locations for some of the Visual Studio user interface elements in the following instructions. The Visual Studio edition that you have and the settings that you use determine these elements. For more information, see Personalizing the IDE.

This walkthrough was written by using Visual C# Development Settings.

Prerequisites

This walkthrough requires the following:

  • This walkthrough uses a dedicated folder ("c:\linqtest7") to hold files. Create this folder before you begin the walkthrough.

  • The Northwind sample database.

    If you do not have this database on your development computer, you can download it from the Microsoft download site. For instructions, see Downloading Sample Databases. After you have downloaded the database, copy the northwnd.mdf file to the c:\linqtest7 folder.

  • A C# code file generated from the Northwind database.

    This walkthrough was written by using the SqlMetal tool with the following command line:

    sqlmetal /code:"c:\linqtest7\northwind.cs" /language:csharp "c:\linqtest7\northwnd.mdf" /sprocs /functions /pluralize

    For more information, see SqlMetal.exe (Code Generation Tool).

Overview

This walkthrough consists of six main tasks:

  • Setting up the LINQ to SQL solution in Visual Studio.

  • Adding the System.Data.Linq assembly to the project.

  • Adding the database code file to the project.

  • Creating a connection with the database.

  • Setting up the user interface.

  • Running and testing the application.

Creating a LINQ to SQL Solution

In this first task, you create a Visual Studio solution that contains the necessary references to build and run a LINQ to SQL project.

To create a LINQ to SQL solution

  1. On the Visual Studio File menu, point to New, and then click Project.

  2. In the Project types pane in the New Project dialog box, click Visual C#.

  3. In the Templates pane, click Windows Forms Application.

  4. In the Name box, type SprocOnlyApp.

  5. In the Location box, verify where you want to store your project files.

  6. Click OK.

    The Windows Forms Designer opens.

Adding the LINQ to SQL Assembly Reference

The LINQ to SQL assembly is not included in the standard Windows Forms Application template. You will have to add the assembly yourself, as explained in the following steps:

To add System.Data.Linq.dll

  1. In Solution Explorer, right-click References, and then click Add Reference.

  2. In the Add Reference dialog box, click .NET, click the System.Data.Linq assembly, and then click OK.

    The assembly is added to the project.

Adding the Northwind Code File to the Project

This step assumes that you have used the SqlMetal tool to generate a code file from the Northwind sample database. For more information, see the Prerequisites section earlier in this walkthrough.

To add the northwind code file to the project

  1. On the Project menu, click Add Existing Item.

  2. In the Add Existing Item dialog box, move to c:\linqtest7\northwind.cs, and then click Add.

    The northwind.cs file is added to the project.

Creating a Database Connection

In this step, you define the connection to the Northwind sample database. This walkthrough uses "c:\linqtest7\northwnd.mdf" as the path.

To create the database connection

  1. In Solution Explorer, right-click Form1.cs, and then click View Code.

  2. Type the following code into the Form1 class:

    C#
    Northwnd db = new Northwnd(@"c:\linqtest7\northwnd.mdf");
    

Setting up the User Interface

In this task you set up an interface so that users can execute stored procedures to access data in the database. In the application that you are developing with this walkthrough, users can access data in the database only by using the stored procedures embedded in the application.

To set up the user interface

  1. Return to the Windows Forms Designer (Form1.cs[Design]).

  2. On the View menu, click Toolbox.

    The toolbox opens.

    Note

    Click the AutoHide pushpin to keep the toolbox open while you perform the remaining steps in this section.

  3. Drag two buttons, two text boxes, and two labels from the toolbox onto Form1.

    Arrange the controls as in the accompanying illustration. Expand Form1 so that the controls fit easily.

  4. Right-click label1, and then click Properties.

  5. Change the Text property from label1 to Enter OrderID:.

  6. In the same way for label2, change the Text property from label2 to Enter CustomerID:.

  7. In the same way, change the Text property for button1 to Order Details.

  8. Change the Text property for button2 to Order History.

    Widen the button controls so that all the text is visible.

To handle button clicks

  1. Double-click Order Details on Form1 to open the button1 event handler in the code editor.

  2. Type the following code into the button1 handler:

    C#
    // Declare a variable to hold the contents of
    // textBox1 as an argument for the stored
    // procedure.
    string param = textBox1.Text;
    
    // Declare a variable to hold the results
    // returned by the stored procedure.
    var custquery = db.CustOrdersDetail(Convert.ToInt32(param));
    
    // Execute the stored procedure and display the results.
    string msg = "";
    foreach (CustOrdersDetailResult custOrdersDetail in custquery)
    {
        msg = msg + custOrdersDetail.ProductName + "\n";
    }
    if (msg == "")
        msg = "No results.";
    MessageBox.Show(msg);
    
    // Clear the variables before continuing.
    param = "";
    textBox1.Text = "";
    
  3. Now double-click button2 on Form1 to open the button2 event handler in the code editor.

  4. Type the following code into the button2 handler:

    C#
    // Comments in the code for button2 are the same
    // as for button1.
    string param = textBox2.Text;
    
    var custquery = db.CustOrderHist(param);
    
    string msg = "";
    foreach (CustOrderHistResult custOrdHist in custquery)
    {
        msg = msg + custOrdHist.ProductName + "\n";
    }
    MessageBox.Show(msg);
    
    param = "";
    textBox2.Text = "";
    

Testing the Application

Now it is time to test your application. Note that your contact with the datastore is limited to whatever actions the two stored procedures can take. Those actions are to return the products included in the order for any OrderID that you enter, or to return a history of products ordered for any CustomerID that you enter.

To test the application

  1. Press F5 to start debugging.

    Form1 appears.

  2. In the Enter OrderID box, type 10249, and then click Order Details.

    A message box lists the products included in order 10249.

    Click OK to close the message box.

  3. In the Enter CustomerID box, type ALFKI, and then click Order History.

    A message box appears that lists the order history for customer ALFKI.

    Click OK to close the message box.

  4. In the Enter OrderID box, type 123, and then click Order Details.

    A message box appears that displays "No results."

    Click OK to close the message box.

  5. On the Debug menu, click Stop debugging.

    The debug session closes.

  6. If you have finished experimenting, you can click Close Project on the File menu, and save your project when you are prompted.

Next Steps

You can enhance this project by making some changes. For example, you could list available stored procedures in a list box and have the user select which procedures to execute. You could also stream the output of the reports to a text file.
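
For example, to stream a report to a text file instead of a message box, you could replace the MessageBox.Show(msg) call in a button handler with something like the following sketch. The output path is only illustrative, and a using directive for System.IO is required.

C#
// Sketch: append the report text to a file instead of displaying it.
// Requires: using System.IO; The path below is illustrative only.
using (StreamWriter writer = File.AppendText(@"c:\linqtest7\reports.txt"))
{
    writer.WriteLine(msg);
}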

See also

Programming Guide

This section contains information about how to create and use your LINQ to SQL object model. If you are using Visual Studio, you can also use the Object Relational Designer to perform many of these same tasks.

You can also search Microsoft Docs for specific issues, and you can participate in the LINQ Forum, where you can discuss more complex topics in detail with experts. Finally, the LINQ to SQL: .NET Language-Integrated Query for Relational Data white paper details LINQ to SQL technology, complete with Visual Basic and C# code examples.

In This Section

Creating the Object Model
Describes how to generate an object model.

Communicating with the Database
Describes how to use a DataContext object as a conduit to the database.

Querying the Database
Describes how to execute queries in LINQ to SQL, and provides many examples.

Making and Submitting Data Changes
Describes how to change data in the database.

Debugging Support
Describes the support available for debugging LINQ to SQL projects.

Background Information
Includes additional items, such as concurrency conflict resolution, creating new databases, and more, for more advanced users.

Related Sections

LINQ to SQL
Provides links to topics that explain the LINQ to SQL technology and demonstrate features.

Stored Procedures
Includes links to topics that illustrate how to use stored procedures.

Introduction to LINQ (C#)
Provides resources to help you begin to learn about LINQ to SQL using C#.

Introduction to LINQ (Visual Basic)
Provides resources to help you begin to learn about LINQ to SQL using Visual Basic.

Creating the Object Model

You can create your object model from an existing database and use the model in its default state. You can also customize many aspects of the model and its behavior.

If you are using Visual Studio, you can use the Object Relational Designer to create your object model.

In This Section

How to: Generate the Object Model in Visual Basic or C#
Describes how to use the SQLMetal command-line tool. Also provides a link to the Object Relational Designer for Visual Studio users.

How to: Generate the Object Model as an External File
Describes how to generate an external mapping file instead of using attribute-based mapping.

How to: Generate Customized Code by Modifying a DBML File
Describes how to generate Visual Basic or C# code from a DBML metadata file.

How to: Validate DBML and External Mapping Files
Describes how to validate mapping files that you have modified (advanced).

How to: Make Entities Serializable
Describes how to add appropriate attributes to make entities serializable.

How to: Customize Entity Classes by Using the Code Editor
Describes how to use the code editor to write your own mapping code, or customize code that has been autogenerated.

Related Sections

The LINQ to SQL Object Model
Provides details about the LINQ to SQL object model.

Typical Steps for Using LINQ to SQL
Explains the typical steps that you follow to implement a LINQ to SQL application.

How to: Generate the Object Model in Visual Basic or C#

In LINQ to SQL, an object model in your own programming language is mapped to a relational database. Two tools are available for automatically generating a Visual Basic or C# model from the metadata of an existing database.

Documentation for the O/R Designer provides examples of how to generate a Visual Basic or C# object model by using the O/R Designer. The following examples show how to use the SQLMetal command-line tool. For more information, see SqlMetal.exe (Code Generation Tool).

Example

The SQLMetal command line shown in the following example produces Visual Basic code as the attribute-based object model of the Northwind sample database. Stored procedures and functions are also rendered.

sqlmetal /code:northwind.vb /language:vb "c:\northwnd.mdf" /sprocs /functions  

Example

The SQLMetal command line shown in the following example produces C# code as the attribute-based object model of the Northwind sample database. Stored procedures and functions are also rendered, and table names are automatically pluralized.

sqlmetal /code:northwind.cs /language:csharp "c:\northwnd.mdf" /sprocs /functions /pluralize  

See also

How to: Generate the Object Model as an External File

As an alternative to attribute-based mapping, you can generate your object model as an external XML file by using the SQLMetal command-line tool. For more information, see SqlMetal.exe (Code Generation Tool). By using an external XML mapping file, you reduce clutter in your code. You can also change behavior by modifying the external file without recompiling the binaries of your application. For more information, see External Mapping.

Note

The Object Relational Designer does not support generation of an external mapping file.

Example

The following command generates an external mapping file from the Northwind sample database.

sqlmetal /server:myserver /database:northwind /map:externalfile.xml  

Example

The following excerpt from an external mapping file shows the mapping for the Customers table in the Northwind sample database. This excerpt was generated by executing SQLMetal with the /map option.

XML
<?xml version="1.0" encoding="utf-8"?>  
<Database xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" Name="northwnd">  
  <Table Name="Customers">  
    <Type Name=".Customer">  
      <Column Name="CustomerID" Member="CustomerID" Storage="_CustomerID" DbType="NChar(5) NOT NULL" CanBeNull="False" IsPrimaryKey="True" />  
      <Column Name="CompanyName" Member="CompanyName" Storage="_CompanyName" DbType="NVarChar(40) NOT NULL" CanBeNull="False" />  
      <Column Name="ContactName" Member="ContactName" Storage="_ContactName" DbType="NVarChar(30)" />  
      <Column Name="ContactTitle" Member="ContactTitle" Storage="_ContactTitle" DbType="NVarChar(30)" />  
      <Column Name="Address" Member="Address" Storage="_Address" DbType="NVarChar(60)" />  
      <Column Name="City" Member="City" Storage="_City" DbType="NVarChar(15)" />  
      <Column Name="Region" Member="Region" Storage="_Region" DbType="NVarChar(15)" />  
      <Column Name="PostalCode" Member="PostalCode" Storage="_PostalCode" DbType="NVarChar(10)" />  
      <Column Name="Country" Member="Country" Storage="_Country" DbType="NVarChar(15)" />  
      <Column Name="Phone" Member="Phone" Storage="_Phone" DbType="NVarChar(24)" />  
      <Column Name="Fax" Member="Fax" Storage="_Fax" DbType="NVarChar(24)" />  
      <Association Name="FK_CustomerCustomerDemo_Customers" Member="CustomerCustomerDemos" Storage="_CustomerCustomerDemos" ThisKey="CustomerID" OtherTable="CustomerCustomerDemo" OtherKey="CustomerID" DeleteRule="NO ACTION" />  
      <Association Name="FK_Orders_Customers" Member="Orders" Storage="_Orders" ThisKey="CustomerID" OtherTable="Orders" OtherKey="CustomerID" DeleteRule="NO ACTION" />  
    </Type>  
  </Table>  
</Database>  
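
To use such a mapping file at run time, load it with XmlMappingSource and pass it to the DataContext constructor together with the connection information. The following is a minimal sketch; the paths are only illustrative, and Customer is assumed to be an entity class generated without mapping attributes (for example, by also passing the /code option to SQLMetal along with /map).

C#
// Minimal sketch: create a DataContext from an external mapping file.
// Requires: using System.Data.Linq; using System.Data.Linq.Mapping;
// The file paths below are illustrative only.
XmlMappingSource mapping =
    XmlMappingSource.FromUrl(@"c:\linqtest\externalfile.xml");

// The constructor accepts a connection string or an .mdf file path
// plus the mapping source.
DataContext db = new DataContext(@"c:\linqtest\northwnd.mdf", mapping);

// Query through the externally mapped Customer entity class.
Table<Customer> customers = db.GetTable<Customer>();
foreach (Customer cust in customers)
{
    Console.WriteLine(cust.CustomerID);
}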

See also

How to: Generate Customized Code by Modifying a DBML File

You can generate Visual Basic or C# source code from a database markup language (.dbml) metadata file. This approach provides an opportunity to customize the default .dbml file before you generate the application mapping code. This is an advanced feature.

The steps in this process are as follows:

  1. Generate a .dbml file.

  2. Use an editor to modify the .dbml file. Note that the .dbml file must validate against the schema definition (.xsd) file for LINQ to SQL .dbml files. For more information, see Code Generation in LINQ to SQL.

  3. Generate the Visual Basic or C# source code.

The following examples use the SQLMetal command-line tool. For more information, see SqlMetal.exe (Code Generation Tool).

Example

The following commands generate a .dbml file from the Northwind sample database. As source for the database metadata, you can use either the name of the database or the name of the .mdf file.

sqlmetal /server:myserver /database:northwind /dbml:mymeta.dbml  
sqlmetal /dbml:mymeta.dbml mydbfile.mdf  

Example

The following commands generate a Visual Basic or C# source code file from a .dbml file.

sqlmetal /namespace:nwind /code:nwind.vb /language:vb DBMLFile.dbml  
sqlmetal /namespace:nwind /code:nwind.cs /language:csharp DBMLFile.dbml  

See also

How to: Validate DBML and External Mapping Files

External mapping files and .dbml files that you modify must be validated against their respective schema definitions. This topic provides Visual Studio users with the steps to implement the validation process.

Note

Your computer might show different names or locations for some of the Visual Studio user interface elements in the following instructions. The Visual Studio edition that you have and the settings that you use determine these elements. For more information, see Personalizing the IDE.

To validate a .dbml or XML file

  1. On the Visual Studio File menu, point to Open, and then click File.

  2. In the Open File dialog box, click the .dbml or XML mapping file that you want to validate.

    The file opens in the XML Editor.

  3. Right-click the window, and then click Properties.

  4. In the Properties window, click the ellipsis for the Schemas property.

    The XML Schemas dialog box opens.

  5. Note the appropriate schema definition for your purpose.

    • DbmlSchema.xsd is the schema definition for validating a .dbml file. For more information, see Code Generation in LINQ to SQL.

    • LinqToSqlMapping.xsd is the schema definition for validating an external XML mapping file. For more information, see External Mapping.

  6. In the Use column of the desired schema definition row, click to open the drop-down box, and then click Use this schema.

    The schema definition file is now associated with your DBML or XML mapping file.

    Make sure no other schema definitions are selected.

  7. On the View menu, click Error List.

    Determine whether errors, warnings, or messages have been generated. If not, the XML file is valid against the schema definition.

Alternate Method for Supplying Schema Definition

If for some reason the appropriate .xsd file does not appear in the XML Schemas dialog box, you can download the .xsd file from a Help topic. The following steps help you save the downloaded file in the Unicode format required by the Visual Studio XML Editor.

To copy a schema definition file from a Help topic

  1. Locate the Help topic that contains the schema definition as described earlier in this topic.

  2. Click Copy Code to copy the code file to the Clipboard.

  3. Start Notepad to create a new file.

  4. Paste the code from the Clipboard into the Notepad file.

  5. On the Notepad File menu, click Save As.

  6. In the Encoding box, select Unicode.

    Important

    This selection ensures that the UTF-16 byte-order mark (FFFE) is prepended to the text file.

  7. In the File name box, create a file name with an .xsd extension.

See also

How to: Make Entities Serializable

You can make entities serializable when you generate your code. Entity classes are decorated with the DataContractAttribute attribute, and columns with the DataMemberAttribute attribute.

Developers using Visual Studio can use the Object Relational Designer for this purpose.

If you are using the SQLMetal command-line tool, use the /serialization option with the unidirectional argument. For more information, see SqlMetal.exe (Code Generation Tool).

Example

The following SQLMetal command lines produce files that have serializable entities.

sqlmetal /code:nwserializable.vb /language:vb "c:\northwnd.mdf" /sprocs /functions /pluralize /serialization:unidirectional  
sqlmetal /code:nwserializable.cs /language:csharp "c:\northwnd.mdf" /sprocs /functions /pluralize /serialization:unidirectional  
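
The generated entity classes then look roughly like the following hand-written sketch; actual SqlMetal output includes additional generated members and change-notification plumbing.

C#
using System.Data.Linq.Mapping;
using System.Runtime.Serialization;

// Hand-written sketch only; real SqlMetal output is more elaborate.
[Table(Name = "Customers")]
[DataContract]
public partial class Customer
{
    [Column(IsPrimaryKey = true)]
    [DataMember(Order = 1)]
    public string CustomerID;

    [Column]
    [DataMember(Order = 2)]
    public string CompanyName;
}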

See also

How to: Customize Entity Classes by Using the Code Editor

Developers using Visual Studio can use the Object Relational Designer to create or customize their entity classes.

You can also use the Visual Studio code editor to write your own mapping code or to customize code that has already been generated. For more information, see Attribute-Based Mapping.

The topics in this section describe how to customize your object model.

How to: Specify Database Names
Describes how to use Name.

How to: Represent Tables as Classes
Describes how to use TableAttribute.

How to: Represent Columns as Class Members
Describes how to use ColumnAttribute.

How to: Represent Primary Keys
Describes how to use IsPrimaryKey.

How to: Map Database Relationships
Provides examples of using the AssociationAttribute attribute.

How to: Represent Columns as Database-Generated
Describes how to use IsDbGenerated.

How to: Represent Columns as Timestamp or Version Columns
Describes how to use IsVersion.

How to: Specify Database Data Types
Describes how to use DbType.

How to: Represent Computed Columns
Describes how to use Expression.

How to: Specify Private Storage Fields
Describes how to use Storage.

How to: Represent Columns as Allowing Null Values
Describes how to use CanBeNull.

How to: Map Inheritance Hierarchies
Describes the mappings required to specify an inheritance hierarchy.

How to: Specify Concurrency-Conflict Checking
Describes how to use UpdateCheck.

See also

How to: Specify Database Names

Use the Name property on a DatabaseAttribute attribute to specify the name of a database when a name is not supplied by the connection.

For code samples, see Name.

To specify the name of the database

  1. Add the DatabaseAttribute attribute to the class declaration for the database.

  2. Add the Name property to the DatabaseAttribute attribute.

  3. Set the Name property value to the name that you want to specify.
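
Example

The following minimal sketch illustrates the three steps above. The MyDataContext class name is a placeholder, and the database name comes from the Northwind sample used throughout these topics.

C#
using System.Data.Linq;
using System.Data.Linq.Mapping;

// Placeholder context class for illustration only. Name overrides the
// database name that would otherwise be inferred from the connection.
[Database(Name = "Northwnd")]
public partial class MyDataContext : DataContext
{
    public MyDataContext(string connection) : base(connection) { }
}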

See also

How to: Represent Tables as Classes

Use the LINQ to SQL TableAttribute attribute to designate a class as an entity class associated with a database table.

To map a class to a database table

  • Add the TableAttribute attribute to the class declaration, and use its Name property to specify the name of the table in the database.

Example

The following code establishes the Customer class as an entity class that is associated with the Customers database table.

C#
[Table(Name = "Customers")]
public class Customer
{
    // ...
}

You do not have to specify the Name property if the name can be inferred. If you do not specify a name, the table name is presumed to be the same as the name of the class.

See also

How to: Represent Columns as Class Members

Use the LINQ to SQL ColumnAttribute attribute to associate a field or property with a database column.

To map a field or property to a database column

  • Add the ColumnAttribute attribute to the property or field declaration, and use its Name property to specify the name of the column in the database.

Example

The following code maps the CustomerID field in the Customer class to the CustomerID column in the Customers database table.

C#
[Table(Name="Customers")]
public class Customer
{
    [Column(Name="CustomerID")]
    public string CustomerID;
    // ...
}

You do not have to specify the Name property if the name can be inferred. If you do not specify a name, the name is presumed to be the same name as that of the property or field.

See also

How to: Represent Primary Keys

Use the LINQ to SQL IsPrimaryKey property on the ColumnAttribute attribute to designate a property or field to represent the primary key for a database column.

For code examples, see IsPrimaryKey.

Note

LINQ to SQL does not support computed columns as primary keys.

To designate a property or field as a primary key

  1. Add the IsPrimaryKey property to the ColumnAttribute attribute.

  2. Specify the value as true.
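
Example

The following sketch applies the steps above to the Customer entity used throughout these topics; it is an illustration rather than the reference sample on the IsPrimaryKey page.

C#
using System.Data.Linq.Mapping;

[Table(Name = "Customers")]
public class Customer
{
    // IsPrimaryKey maps this member to the table's primary key.
    [Column(IsPrimaryKey = true)]
    public string CustomerID;
}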

See also

How to: Map Database Relationships

Data relationships that will always be the same can be encoded as property references in your entity class. In the Northwind sample database, for example, because customers typically place orders, there is always a relationship in the model between customers and their orders.

LINQ to SQL defines an AssociationAttribute attribute to help represent such relationships. This attribute is used together with the EntitySet<TEntity> and EntityRef<TEntity> types to represent what would be a foreign key relationship in a database. For more information, see the Association Attribute section of Attribute-Based Mapping.

Note

AssociationAttribute and ColumnAttribute Storage property values are case sensitive. For example, ensure that values used in the attribute for the AssociationAttribute.Storage property match the case for the corresponding property names used elsewhere in the code. This applies to all .NET programming languages, even those which are not typically case sensitive, including Visual Basic. For more information about the Storage property, see DataAttribute.Storage.

Most relationships are one-to-many, as in the example later in this topic. You can also represent one-to-one and many-to-many relationships as follows:

  • One-to-one: Represent this kind of relationship by including EntitySet<TEntity> on both sides.

    For example, consider a Customer-SecurityCode relationship, created so that the customer's security code will not be found in the Customer table and can be accessed only by authorized persons.

  • Many-to-many: In many-to-many relationships, the primary key of the link table (also named the junction table) is often formed by a composite of the foreign keys from the other two tables.

    For example, consider an Employee-Project many-to-many relationship formed by using the link table EmployeeProject. LINQ to SQL requires that such a relationship be modeled by using three classes: Employee, Project, and EmployeeProject. In this case, changing the relationship between an Employee and a Project can appear to require an update of the primary key of EmployeeProject. However, this situation is best modeled as deleting the existing EmployeeProject and then creating a new EmployeeProject.

    Note

    Relationships in relational databases are typically modeled as foreign key values that refer to primary keys in other tables. To navigate between them you explicitly associate the two tables by using a relational join operation.

    Objects in LINQ to SQL, on the other hand, refer to each other by using property references or collections of references that you navigate by using dot notation.

Example

In the following one-to-many example, the Customer class has a property that declares the relationship between customers and their orders. The Orders property is of type EntitySet<TEntity>. This type signifies that this relationship is one-to-many (one customer to many orders). The OtherKey property is used to describe how this association is accomplished, namely, by specifying the name of the property in the related class to be compared with this one. In this example, the CustomerID property is compared, just as a database join would compare that column value.

Note

If you are using Visual Studio, you can use the Object Relational Designer to create an association between classes.

C#
[Table(Name = "Customers")]
public partial class Customer
{
    [Column(IsPrimaryKey = true)]
    public string CustomerID;
    // ...
    private EntitySet<Order> _Orders;
    [Association(Storage = "_Orders", OtherKey = "CustomerID")]
    public EntitySet<Order> Orders
    {
        get { return this._Orders; }
        set { this._Orders.Assign(value); }
    }
}

Example

You can also reverse the situation. Instead of using the Customer class to describe the association between customers and orders, you can use the Order class. The Order class uses the EntityRef<TEntity> type to describe the relationship back to the customer, as in the following code example.

Note

The EntityRef<TEntity> class supports deferred loading. For more information, see Deferred versus Immediate Loading.

C#
[Table(Name = "Orders")]
public class Order
{
    [Column(IsPrimaryKey = true)]
    public int OrderID;
    [Column]
    public string CustomerID;
    private EntityRef<Customer> _Customer;
    [Association(Storage = "_Customer", ThisKey = "CustomerID")]
    public Customer Customer
    {
        get { return this._Customer.Entity; }
        set { this._Customer.Entity = value; }
    }
}

See also

How to: Represent Columns as Database-Generated

Use the LINQ to SQL IsDbGenerated property on the ColumnAttribute attribute to designate a field or property as representing a database-generated column.

For code examples, see IsDbGenerated.

To designate a field or property as representing a database-generated column

  1. Add the IsDbGenerated property to the ColumnAttribute attribute.

  2. Set the property value to true.
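
Example

As an illustration of the steps above, the OrderID column in the Northwind Orders table is an identity column, so a mapping would resemble the following sketch.

C#
using System.Data.Linq.Mapping;

[Table(Name = "Orders")]
public class Order
{
    // The database assigns OrderID on insert; marking it IsDbGenerated
    // tells LINQ to SQL to read the value back rather than supply one.
    [Column(IsPrimaryKey = true, IsDbGenerated = true)]
    public int OrderID;
}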

See also

How to: Represent Columns as Timestamp or Version Columns

Use the LINQ to SQL IsVersion property of the ColumnAttribute attribute to designate a field or property as representing a database column that holds database timestamps or version numbers.

For code examples, see IsVersion.

To designate a field or property as representing a timestamp or version column

  1. Add the IsVersion property to the ColumnAttribute attribute.

  2. Set the IsVersion property value to true.
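
Example

The following sketch assumes a hypothetical RowVersion column of SQL type rowversion (timestamp) on the Products table; the mapped member uses the System.Data.Linq.Binary type.

C#
using System.Data.Linq;
using System.Data.Linq.Mapping;

[Table(Name = "Products")]
public class Product
{
    [Column(IsPrimaryKey = true)]
    public int ProductID;

    // Hypothetical version column used for optimistic concurrency checks.
    [Column(IsVersion = true)]
    public Binary RowVersion;
}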

See also

How to: Specify Database Data Types

Use the LINQ to SQL DbType property on a ColumnAttribute attribute to specify the exact text that defines the column in a T-SQL table declaration.

You must specify the DbType property only if you plan to use CreateDatabase to create an instance of the database.

For code examples, see DbType.

To specify text to define a data type in a T-SQL table

  1. Add the DbType property to the ColumnAttribute attribute.

  2. Set the value of the DbType property to the exact text that is used by T-SQL.
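
Example

The DbType strings in the following sketch are taken from the Customers table definition in the .dbml file shown earlier in this document.

C#
using System.Data.Linq.Mapping;

[Table(Name = "Customers")]
public class Customer
{
    // CreateDatabase emits the DbType text verbatim in the column definition.
    [Column(DbType = "NChar(5) NOT NULL", IsPrimaryKey = true)]
    public string CustomerID;

    [Column(DbType = "NVarChar(40) NOT NULL", CanBeNull = false)]
    public string CompanyName;
}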

See also

How to: Represent Computed Columns

Use the LINQ to SQL Expression property on a ColumnAttribute attribute to represent a column whose contents are the result of calculation.

For code examples, see Expression.

Note

LINQ to SQL does not support computed columns as primary keys.

To represent a computed column

  1. Add the Expression property to the ColumnAttribute attribute.

  2. Assign a string representation of the formula to the Expression property.
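
Example

The following sketch shows a hypothetical computed LineTotal member on an Order Details mapping; the Expression string is the T-SQL formula that CreateDatabase would emit for the column.

C#
using System.Data.Linq.Mapping;

[Table(Name = "Order Details")]
public class OrderDetail
{
    [Column(IsPrimaryKey = true)]
    public int OrderID;

    [Column(IsPrimaryKey = true)]
    public int ProductID;

    [Column]
    public decimal UnitPrice;

    [Column]
    public short Quantity;

    // Hypothetical computed column defined by a T-SQL expression.
    [Column(Expression = "(UnitPrice * Quantity)", DbType = "Money")]
    public System.Nullable<decimal> LineTotal;
}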

See also

How to: Specify Private Storage Fields

Use the LINQ to SQL Storage property on the DataAttribute attribute to designate the name of an underlying storage field.

For code examples, see Storage.

To specify the name of an underlying storage field

  1. Add the Storage property to the ColumnAttribute attribute.

  2. Assign the name of the field as the value of the Storage property.
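
Example

The following sketch points the Storage property at a private backing field, matching the Storage="_CustomerID" entry in the .dbml file shown earlier.

C#
using System.Data.Linq.Mapping;

[Table(Name = "Customers")]
public class Customer
{
    private string _CustomerID;

    // Storage makes LINQ to SQL read and write _CustomerID directly,
    // bypassing the public property accessors.
    [Column(Storage = "_CustomerID", IsPrimaryKey = true)]
    public string CustomerID
    {
        get { return _CustomerID; }
        set { _CustomerID = value; }
    }
}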

See also

How to: Represent Columns as Allowing Null Values

Use the LINQ to SQL CanBeNull property on the ColumnAttribute attribute to specify that the associated database column can hold null values.

For code examples, see CanBeNull.

To designate a column as allowing null values

  1. Add the CanBeNull property to the ColumnAttribute attribute.

  2. Set the CanBeNull property value to true.
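
Example

For example, the Region column in the Northwind Customers table allows nulls, as the .dbml file earlier in this document shows; a sketch of the mapping follows.

C#
using System.Data.Linq.Mapping;

[Table(Name = "Customers")]
public class Customer
{
    [Column(IsPrimaryKey = true)]
    public string CustomerID;

    // The database column allows NULL, and string members can hold null.
    [Column(CanBeNull = true)]
    public string Region;
}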

See also

How to: Map Inheritance Hierarchies

To implement inheritance mapping in LINQ, you must specify the attributes and attribute properties on the root class of the inheritance hierarchy as described in the following steps. Developers using Visual Studio can use the Object Relational Designer to map inheritance hierarchies. See How to: Configure inheritance by using the O/R Designer.

Note

No special attributes or properties are required on the subclasses. Note especially that subclasses do not have the TableAttribute attribute.

To map an inheritance hierarchy

  1. Add the TableAttribute attribute to the root class.

  2. Also to the root class, add an InheritanceMappingAttribute attribute for each class in the hierarchy structure.

  3. For each InheritanceMappingAttribute attribute, define a Code property.

    This property holds a value that appears in the database table in the IsDiscriminator column to indicate which class or subclass this row of data belongs to.

  4. For each InheritanceMappingAttribute attribute, also add a Type property.

    This property holds a value that specifies which class or subclass the key value signifies.

  5. On only one of the InheritanceMappingAttribute attributes, add an IsDefault property.

    This property serves to designate a fallback mapping when the discriminator value from the database table does not match any Code value in the inheritance mappings.

  6. Add an IsDiscriminator property for a ColumnAttribute attribute.

    This property signifies that this is the column that holds the Code value.

Example

Note

If you are using Visual Studio, you can use the Object Relational Designer to configure inheritance. See How to: Configure inheritance by using the O/R Designer.

In the following code example, Vehicle is defined as the root class, and the previous steps have been implemented to describe the hierarchy for LINQ.

C#
[Table]
[InheritanceMapping(Code = "C", Type = typeof(Car))]
[InheritanceMapping(Code = "T", Type = typeof(Truck))]
[InheritanceMapping(Code = "V", Type = typeof(Vehicle),
    IsDefault = true)]
public class Vehicle
{
    [Column(IsDiscriminator = true)]
    public string DiscKey;
    [Column(IsPrimaryKey = true)]
    public string VIN;
    [Column]
    public string MfgPlant;
}
public class Car : Vehicle
{
    [Column]
    public int TrimCode;
    [Column]
    public string ModelName;
}

public class Truck : Vehicle
{
    [Column]
    public int Tonnage;
    [Column]
    public int Axles;
}

See also

How to: Specify Concurrency-Conflict Checking

You can specify which columns of the database are to be checked for concurrency conflicts when you call SubmitChanges. For more information, see How to: Specify Which Members are Tested for Concurrency Conflicts.

Example

The following code specifies that the HomePage member should never be tested during update checks. For more information, see UpdateCheck.

C#
[Column(Storage="_HomePage", DbType="NText", UpdateCheck=UpdateCheck.Never)]
public string HomePage
{
    get
    {
        return this._HomePage;
    }
    set
    {
        if ((this._HomePage != value))
        {
            this.OnHomePageChanging(value);
            this.SendPropertyChanging();
            this._HomePage = value;
            this.SendPropertyChanged("HomePage");
            this.OnHomePageChanged();
        }
    }
}

See also

Communicating with the Database

The topics in this section describe some basic aspects of how you establish and maintain communication with the database.

In This Section

How to: Connect to a Database
Describes how to use the DataContext class to connect to a database.

How to: Directly Execute SQL Commands
Describes how you can use ExecuteCommand to send SQL-language commands.

How to: Reuse a Connection Between an ADO.NET Command and a DataContext
Provides examples of how to use an existing ADO.NET connection in a LINQ to SQL application.

See also

How to: Connect to a Database

The DataContext is the main conduit by which you connect to a database, retrieve objects from it, and submit changes back to it. You use the DataContext just as you would use an ADO.NET SqlConnection. In fact, the DataContext is initialized with a connection or connection string that you supply. For more information, see DataContext Methods (O/R Designer).

The purpose of the DataContext is to translate your requests for objects into SQL queries to be made against the database, and then to assemble objects out of the results. The DataContext enables Language-Integrated Query (LINQ) by implementing the same operator pattern as the Standard Query Operators, such as Where and Select.

Important

Maintaining a secure connection is of the highest importance. For more information, see Security in LINQ to SQL.

Example

In the following example, the DataContext is used to connect to the Northwind sample database and to retrieve rows of customers whose city is London.

C#
// DataContext takes a connection string. 
DataContext db = new DataContext(@"c:\Northwind.mdf");

// Get a typed table to run queries.
Table<Customer> Customers = db.GetTable<Customer>();

// Query for customers from London.
var query =
    from cust in Customers
    where cust.City == "London"
    select cust;

foreach (var cust in query)
    Console.WriteLine("id = {0}, City = {1}", cust.CustomerID, cust.City);

Each database table is represented as a Table collection available by way of the GetTable method, by using the entity class to identify it.

Example

Best practice is to declare a strongly typed DataContext instead of relying on the basic DataContext class and the GetTable method. A strongly typed DataContext declares all Table collections as members of the context, as in the following example.

C#
public partial class Northwnd : DataContext
{
    public Table<Customer> Customers;
    public Table<Order> Orders;
    public Northwnd(string connection) : base(connection) { }
}

You can then express the query for customers from London more simply as:

C#
Northwnd db = new Northwnd(@"c:\Northwnd.mdf");
var query =
    from cust in db.Customers
    where cust.City == "London"
    select cust;
foreach (var cust in query)
    Console.WriteLine("id = {0}, City = {1}", cust.CustomerID,
        cust.City);

See also

How to: Directly Execute SQL Commands

Assuming a DataContext connection, you can use ExecuteCommand to execute SQL commands that do not return objects.

Example

The following example causes SQL Server to increase UnitPrice by 1.00.

C#
db.ExecuteCommand("UPDATE Products SET UnitPrice = UnitPrice + 1.00");

See also

How to: Reuse a Connection Between an ADO.NET Command and a DataContext

Because LINQ to SQL is a part of the ADO.NET family of technologies and is based on services provided by ADO.NET, you can reuse a connection between an ADO.NET command and a DataContext.

Example

The following example shows how to reuse the same connection between an ADO.NET command and the DataContext.

C#
string connString = @"Data Source=.\SQLEXPRESS;AttachDbFilename=c:\northwind.mdf;
    Integrated Security=True; Connect Timeout=30; User Instance=True";
SqlConnection nwindConn = new SqlConnection(connString);
nwindConn.Open();

Northwnd interop_db = new Northwnd(nwindConn);

SqlTransaction nwindTxn = nwindConn.BeginTransaction();

try
{
    SqlCommand cmd = new SqlCommand(
        "UPDATE Products SET QuantityPerUnit = 'single item' WHERE ProductID = 3");
    cmd.Connection = nwindConn;
    cmd.Transaction = nwindTxn;
    cmd.ExecuteNonQuery();

    interop_db.Transaction = nwindTxn;

    Product prod1 = interop_db.Products
        .First(p => p.ProductID == 4);
    Product prod2 = interop_db.Products
        .First(p => p.ProductID == 5);
    prod1.UnitsInStock -= 3;
    prod2.UnitsInStock -= 5;

    interop_db.SubmitChanges();

    nwindTxn.Commit();
}
catch (Exception e)
{
    Console.WriteLine(e.Message);
    Console.WriteLine("Error submitting changes... all changes rolled back.");
}

nwindConn.Close();

See also

Querying the Database

This group of topics describes how to develop and execute queries in LINQ to SQL projects.

In This Section

How to: Query for Information
Briefly shows how LINQ to SQL queries are basically the same as LINQ queries generally.

How to: Retrieve Information As Read-Only
Describes how to increase query performance when no change to the data is planned.

How to: Control How Much Related Data Is Retrieved
Describes how to control which related data is retrieved together with the main target.

How to: Filter Related Data
Describes how to retrieve related data by using a sub-query.

How to: Turn Off Deferred Loading
Describes how to turn off deferred loading.

How to: Directly Execute SQL Queries
Describes how to submit queries by using SQL language.

How to: Store and Reuse Queries
Describes how to compile a query one time but use it multiple times with different parameters.

How to: Handle Composite Keys in Queries
Describes how to include more than one column in a query where the operator takes only a single argument.

How to: Retrieve Many Objects At Once
Describes how to use LoadWith.

How to: Filter at the DataContext Level
Describes another use of LoadWith.

Query Examples
Provides many examples of queries.

How to: Query for Information

Queries in LINQ to SQL use the same syntax as queries in LINQ. The only difference is that the objects referenced in LINQ to SQL queries are mapped to elements in a database. For more information, see Introduction to LINQ Queries (C#).

LINQ to SQL translates the queries you write into equivalent SQL queries and sends them to the server for processing.

Some features of LINQ queries might need special attention in LINQ to SQL applications. For more information, see Query Concepts.

Example

The following query asks for a list of customers from London. In this example, Customers is a table in the Northwind sample database.

C#
Northwnd db = new Northwnd(@"c:\northwnd.mdf");

// Query for customers in London.
IQueryable<Customer> custQuery =
    from cust in db.Customers
    where cust.City == "London"
    select cust;

See also

How to: Retrieve Information As Read-Only

When you do not intend to change the data, you can increase the performance of queries by seeking read-only results.

You implement read-only processing by setting ObjectTrackingEnabled to false.

Note

When ObjectTrackingEnabled is set to false, DeferredLoadingEnabled is implicitly set to false.

Example

The following code retrieves a read-only collection of employee hire dates.

C#
Northwnd db = new Northwnd(@"c:\northwnd.mdf");

db.ObjectTrackingEnabled = false;
IOrderedQueryable<Employee> hireQuery =
    from emp in db.Employees
    orderby emp.HireDate
    select emp;

foreach (Employee empObj in hireQuery)
{
    Console.WriteLine("EmpID = {0}, Date Hired = {1}",
        empObj.EmployeeID, empObj.HireDate);
}

See also

How to: Control How Much Related Data Is Retrieved

Use the LoadWith method to specify which data related to your main target should be retrieved at the same time. For example, if you know you will need information about customers' orders, you can use LoadWith to make sure that the order information is retrieved at the same time as the customer information. This approach results in only one trip to the database for both sets of information.

Note

You can retrieve data related to the main target of your query by retrieving a cross-product as one large projection, such as retrieving orders when you target customers. But this approach often has disadvantages. For example, the results are only projections, not entities that can be changed and persisted by LINQ to SQL, and you might retrieve far more data than you need.

Example

In the following example, all the Orders for all the Customers who are located in London are retrieved when the query is executed. As a result, successive access to the Orders property on a Customer object does not trigger a new database query.

C#
Northwnd db = new Northwnd(@"c:\northwnd.mdf");
DataLoadOptions dlo = new DataLoadOptions();
dlo.LoadWith<Customer>(c => c.Orders);
db.LoadOptions = dlo;

var londonCustomers =
    from cust in db.Customers
    where cust.City == "London"
    select cust;

foreach (var custObj in londonCustomers)
{
    Console.WriteLine(custObj.CustomerID);
}

See also

How to: Filter Related Data

Use the AssociateWith method to specify sub-queries to limit the amount of retrieved data.

Example

In the following example, the AssociateWith method limits the Orders retrieved to those that have not been shipped today. Without this approach, all Orders would have been retrieved even though only a subset is desired.

C#
Northwnd db = new Northwnd(@"c:\northwnd.mdf");
DataLoadOptions dlo = new DataLoadOptions();
dlo.AssociateWith<Customer>(c => c.Orders.Where(p => p.ShippedDate != DateTime.Today));
db.LoadOptions = dlo;
var custOrderQuery = 
    from cust in db.Customers
    where cust.City == "London"
    select cust;

foreach (Customer custObj in custOrderQuery)
{
    Console.WriteLine(custObj.CustomerID);
    foreach (Order ord in custObj.Orders)
    {
        Console.WriteLine("\t {0}",ord.OrderDate);
    } 
}

See also

How to: Turn Off Deferred Loading

You can turn off deferred loading by setting DeferredLoadingEnabled to false. For more information, see Deferred versus Immediate Loading.

Note

Deferred loading is turned off by implication when object tracking is turned off. For more information, see How to: Retrieve Information As Read-Only.

Example

The following example shows how to turn off deferred loading by setting DeferredLoadingEnabled to false.

C#
Northwnd db = new Northwnd(@"c:\northwnd.mdf");
db.DeferredLoadingEnabled = false;

DataLoadOptions ds = new DataLoadOptions();
ds.LoadWith<Customer>(c => c.Orders);
ds.LoadWith<Order>(o => o.OrderDetails);
db.LoadOptions = ds;

var custQuery =
    from cust in db.Customers
    where cust.City == "London"
    select cust;

foreach (Customer custObj in custQuery)
{
    Console.WriteLine("Customer ID: {0}", custObj.CustomerID);
    foreach (Order ord in custObj.Orders)
    {
        Console.WriteLine("\tOrder ID: {0}", ord.OrderID);
        foreach (OrderDetail detail in ord.OrderDetails)
        {
            Console.WriteLine("\t\tProduct ID: {0}", detail.ProductID);
        }
    }
}

See also

How to: Directly Execute SQL Queries

LINQ to SQL translates the queries you write into parameterized SQL queries (in text form) and sends them to the SQL server for processing.

SQL cannot execute the variety of methods that might be locally available to your application. LINQ to SQL tries to convert these local methods to equivalent operations and functions that are available inside the SQL environment. Most methods and operators on .NET Framework built-in types have direct translations to SQL commands. Some can be produced from the functions that are available. Those that cannot be produced generate run-time exceptions. For more information, see SQL-CLR Type Mapping.

In cases where a LINQ to SQL query is insufficient for a specialized task, you can use the ExecuteQuery method to execute a SQL query, and then convert the result of your query directly into objects.

Example

In the following example, assume that the data for the Customer class is spread over two tables (customer1 and customer2). The query returns a sequence of Customer objects.

C#
Northwnd db = new Northwnd(@"c:\northwnd.mdf");
IEnumerable<Customer> results = db.ExecuteQuery<Customer>
(@"SELECT c1.custid as CustomerID, c2.custName as ContactName
    FROM customer1 as c1, customer2 as c2
    WHERE c1.custid = c2.custid"
);

As long as the column names in the tabular results match column properties of your entity class, LINQ to SQL creates your objects out of any SQL query.

Example

The ExecuteQuery method also allows for parameters. Use code such as the following to execute a parameterized query.

C#
Northwnd db = new Northwnd(@"c:\northwnd.mdf");
IEnumerable<Customer> results = db.ExecuteQuery<Customer>
    ("SELECT contactname FROM customers WHERE city = {0}",
    "London");

The parameters are expressed in the query text by using the same curly notation used by Console.WriteLine() and String.Format(). In fact, String.Format() is actually called on the query string you provide, substituting the curly braced parameters with generated parameter names such as @p0, @p1 …, @p(n).

See also

How to: Store and Reuse Queries

When you have an application that executes structurally similar queries many times, you can often increase performance by compiling the query one time and executing it several times with different parameters. For example, an application might have to retrieve all the customers who are in a particular city, where the city is specified at runtime by the user in a form. LINQ to SQL supports the use of compiled queries for this purpose.

Note

This pattern of usage represents the most common use for compiled queries. Other approaches are possible. For example, compiled queries can be stored as static members on a partial class that extends the code generated by the designer.

Example

In many scenarios you might want to reuse the queries across thread boundaries. In such cases, storing the compiled queries in static variables is especially effective. The following code example assumes a Queries class designed to store compiled queries, and assumes a Northwnd class that represents a strongly typed DataContext.

C#
public static Func<Northwnd, string, IQueryable<Customer>>
    CustomersByCity =
        CompiledQuery.Compile((Northwnd db, string city) =>
            from c in db.Customers where c.City == city select c);

public static Func<Northwnd, string, IQueryable<Customer>>
    CustomersById = CompiledQuery.Compile((Northwnd db,
    string id) => db.Customers.Where(c => c.CustomerID == id));
C#
// The following example invokes such a compiled query in the main
// program.

public IEnumerable<Customer> GetCustomersByCity(string city)
{
    var myDb = GetNorthwind();
    return Queries.CustomersByCity(myDb, city);
}

Example

You cannot currently store (in static variables) queries that return an anonymous type, because the type has no name to provide as a generic argument. The following example shows how you can work around this issue by creating a type that can represent the result, and then using that type as the generic argument.

C#
class SimpleCustomer
{
    public string ContactName { get; set; }
}

class Queries2
{
    public static Func<Northwnd, string, IEnumerable<SimpleCustomer>> CustomersByCity =
        CompiledQuery.Compile<Northwnd, string, IEnumerable<SimpleCustomer>>(
        (Northwnd db, string city) =>
        from c in db.Customers
        where c.City == city
        select new SimpleCustomer { ContactName = c.ContactName });
}

See also

How to: Handle Composite Keys in Queries

Some operators can take only one argument. If your argument must include more than one column from the database, you must create an anonymous type to represent the combination.

Example

The following example shows a query that invokes the GroupBy operator, which can take only one key argument.

C#
var query =
    from cust in db.Customers
    group cust.ContactName by new { City = cust.City, Region = cust.Region };

foreach (var grp in query)
{
    Console.WriteLine("\nLocation Key: {0}", grp.Key);
    foreach (var listing in grp)
    {
        Console.WriteLine("\t{0}", listing);
    }
}

Example

The same situation pertains to joins, as in the following example:

C#
var query =
    from ord in db.Orders
    from prod in db.Products
    join det in db.OrderDetails
        on new { ord.OrderID, prod.ProductID } equals new { det.OrderID, det.ProductID }
        into details
    from det in details
    select new { ord.OrderID, prod.ProductID, det.UnitPrice };

See also

How to: Retrieve Many Objects At Once

You can retrieve many objects in one query by using LoadWith.

Example

The following code uses the LoadWith method to retrieve both Customer and Order objects.

C#
Northwnd db = new Northwnd(@"northwnd.mdf");
DataLoadOptions ds = new DataLoadOptions();
ds.LoadWith<Customer>(c => c.Orders);
ds.LoadWith<Order>(o => o.OrderDetails);
db.LoadOptions = ds;

var custQuery =
    from cust in db.Customers
    where cust.City == "London"
    select cust;

foreach (Customer custObj in custQuery)
{
    Console.WriteLine("Customer ID: {0}", custObj.CustomerID);
    foreach (Order ord in custObj.Orders)
    {
        Console.WriteLine("\tOrder ID: {0}", ord.OrderID);
        foreach (OrderDetail detail in ord.OrderDetails)
        {
            Console.WriteLine("\t\tProduct ID: {0}", detail.ProductID);
        }
    }
}

See also

How to: Filter at the DataContext Level

You can filter EntitySets at the DataContext level. Such filters apply to all queries done with that DataContext instance.

Example

In the following example, DataLoadOptions.AssociateWith(LambdaExpression) is used to filter the pre-loaded orders for customers by ShippedDate.

C#
Northwnd db = new Northwnd(@"northwnd.mdf");
// Preload Orders for Customer.
// One directive per relationship to be preloaded.
DataLoadOptions ds = new DataLoadOptions();
ds.LoadWith<Customer>(c => c.Orders);
ds.AssociateWith<Customer>
    (c => c.Orders.Where(p => p.ShippedDate != DateTime.Today));
db.LoadOptions = ds;

var custQuery =
    from cust in db.Customers
    where cust.City == "London"
    select cust;

foreach (Customer custObj in custQuery)
{
    Console.WriteLine("Customer ID: {0}", custObj.CustomerID);
    foreach (Order ord in custObj.Orders)
    {
        Console.WriteLine("\tOrder ID: {0}", ord.OrderID);
        foreach (OrderDetail detail in ord.OrderDetails)
        {
            Console.WriteLine("\t\tProduct ID: {0}", detail.ProductID);
        }
    }
}

See also

Query Examples

This section provides Visual Basic and C# examples of typical LINQ to SQL queries. Developers using Visual Studio can find many more examples in a sample solution available in the Samples section. For more information, see Samples.

Important

db is often used in code examples in LINQ to SQL documentation. db is assumed to be an instance of a Northwnd class, which inherits from DataContext.

In This Section

Aggregate Queries
Describes how to use Average, Count, and so forth.

Return the First Element in a Sequence
Provides examples of using First.

Return Or Skip Elements in a Sequence
Provides examples of using Take and Skip.

Sort Elements in a Sequence
Provides examples of using OrderBy.

Group Elements in a Sequence
Provides examples of using GroupBy.

Eliminate Duplicate Elements from a Sequence
Provides examples of using Distinct.

Determine if Any or All Elements in a Sequence Satisfy a Condition
Provides examples of using All and Any.

Concatenate Two Sequences
Provides examples of using Concat.

Return the Set Difference Between Two Sequences
Provides examples of using Except.

Return the Set Intersection of Two Sequences
Provides examples of using Intersect.

Return the Set Union of Two Sequences
Provides examples of using Union.

Convert a Sequence to an Array
Provides examples of using ToArray.

Convert a Sequence to a Generic List
Provides examples of using ToList.

Convert a Type to a Generic IEnumerable
Provides examples of using AsEnumerable.

Formulate Joins and Cross-Product Queries
Provides examples of using foreign-key navigation in the from, where, and select clauses.

Formulate Projections
Provides examples of combining select with other features (for example, anonymous types) to form query projections.

Related Sections

Standard Query Operators Overview (C#)
Explains the concept of standard query operators using C#.

Standard Query Operators Overview (Visual Basic)
Explains the concept of standard query operators using Visual Basic.

Query Concepts
Explains how LINQ to SQL uses concepts that apply to queries.

Programming Guide
Provides a portal to topics that explain programming concepts related to LINQ to SQL.

Aggregate Queries

LINQ to SQL supports the Average, Count, Max, Min, and Sum aggregate operators. Note the following characteristics of aggregate operators in LINQ to SQL:

  • Aggregate queries are executed immediately.

    For more information, see Introduction to LINQ Queries (C#).

  • Aggregate queries typically return a number instead of a collection.

    For more information, see Aggregation Operations.

  • You cannot call aggregates against anonymous types.

The examples in the following topics derive from the Northwind sample database. For more information, see Downloading Sample Databases.

In This Section

Return the Average Value From a Numeric Sequence
Demonstrates how to use the Average operator.

Count the Number of Elements in a Sequence
Demonstrates how to use the Count operator.

Find the Maximum Value in a Numeric Sequence
Demonstrates how to use the Max operator.

Find the Minimum Value in a Numeric Sequence
Demonstrates how to use the Min operator.

Compute the Sum of Values in a Numeric Sequence
Demonstrates how to use the Sum operator.

Related Sections

Query Examples
Provides links to LINQ to SQL queries in Visual Basic and C#.

Query Concepts
Provides links to topics that explain concepts for designing LINQ queries in LINQ to SQL.

Introduction to LINQ Queries (C#)
Explains how queries work in LINQ.

Return the Average Value From a Numeric Sequence

The Average operator computes the average of a sequence of numeric values.

Note

The LINQ to SQL translation of Average of integer values is computed as an integer, not as a double.

Example

The following example returns the average of Freight values in the Orders table.

Results from the sample Northwind database would be 78.2442.

C#
System.Nullable<Decimal> averageFreight =
    (from ord in db.Orders
    select ord.Freight)
    .Average();

Console.WriteLine(averageFreight);

Example

The following example returns the average of the unit price of all Products in the Products table.

Results from the sample Northwind database would be 28.8663.

C#
System.Nullable<Decimal> averageUnitPrice =
    (from prod in db.Products
    select prod.UnitPrice)
    .Average();

Console.WriteLine(averageUnitPrice);

Example

The following example uses the Average operator to find those Products whose unit price is higher than the average unit price of the category they belong to. The example then displays the results in groups.

Note that this example requires the use of the var keyword in C#, because the return type is anonymous.

C#
var priceQuery =
    from prod in db.Products
    group prod by prod.CategoryID into grouping
    select new
    {
        grouping.Key,
        ExpensiveProducts =
            from prod2 in grouping
            where prod2.UnitPrice > grouping.Average(prod3 =>
                prod3.UnitPrice)
        select prod2
    };

foreach (var grp in priceQuery)
{
    Console.WriteLine(grp.Key);
    foreach (var listing in grp.ExpensiveProducts)
    {
        Console.WriteLine(listing.ProductName);
    }
}

If you run this query against the Northwind sample database, the results should resemble the following:

1

Côte de Blaye

Ipoh Coffee

2

Grandma's Boysenberry Spread

Northwoods Cranberry Sauce

Sirop d'érable

Vegie-spread

3

Sir Rodney's Marmalade

Gumbär Gummibärchen

Schoggi Schokolade

Tarte au sucre

4

Queso Manchego La Pastora

Mascarpone Fabioli

Raclette Courdavault

Camembert Pierrot

Gudbrandsdalsost

Mozzarella di Giovanni

5

Gustaf's Knäckebröd

Gnocchi di nonna Alice

Wimmers gute Semmelknödel

6

Mishi Kobe Niku

Thüringer Rostbratwurst

7

Rössle Sauerkraut

Manjimup Dried Apples

8

Ikura

Carnarvon Tigers

Nord-Ost Matjeshering

Gravad lax

See also

Count the Number of Elements in a Sequence

Use the Count operator to count the number of elements in a sequence.

Example

The following example counts the number of Customers in the database.

Running this query against the Northwind sample database produces an output of 91.

C#
System.Int32 customerCount = db.Customers.Count();
Console.WriteLine(customerCount);

Example

The following example counts the number of products in the database that have not been discontinued.

Running this example against the Northwind sample database produces an output of 69.

C#
System.Int32 notDiscontinuedCount =
    (from prod in db.Products
    where !prod.Discontinued
    select prod)
    .Count();

Console.WriteLine(notDiscontinuedCount);

See also

Find the Maximum Value in a Numeric Sequence

Use the Max operator to find the highest value in a sequence of numeric values.

Example

The following example finds the latest date of hire for any employee.

If you run this query against the sample Northwind database, the output is: 11/15/1994 12:00:00 AM.

C#
System.Nullable<DateTime> latestHireDate =
    (from emp in db.Employees
    select emp.HireDate)
    .Max();

Console.WriteLine(latestHireDate);

Example

The following example finds the most units in stock for any product.

If you run this example against the sample Northwind database, the output is: 125.

C#
System.Nullable<Int16> maxUnitsInStock =
    (from prod in db.Products
    select prod.UnitsInStock)
    .Max();

Console.WriteLine(maxUnitsInStock);

Example

The following example uses Max to find the Products that have the highest unit price in each category. The output then lists the results by category.

C#
var maxQuery =
    from prod in db.Products
    group prod by prod.CategoryID into grouping
    select new
    {
        grouping.Key,
        MostExpensiveProducts =
            from prod2 in grouping
            where prod2.UnitPrice == grouping.Max(prod3 =>
                prod3.UnitPrice)
            select prod2
    };

foreach (var grp in maxQuery)
{
    Console.WriteLine(grp.Key);
    foreach (var listing in grp.MostExpensiveProducts)
    {
        Console.WriteLine(listing.ProductName);
    }
}

If you run the previous query against the Northwind sample database, your results will resemble the following:

1

Côte de Blaye

2

Vegie-spread

3

Sir Rodney's Marmalade

4

Raclette Courdavault

5

Gnocchi di nonna Alice

6

Thüringer Rostbratwurst

7

Manjimup Dried Apples

8

Carnarvon Tigers

See also

Find the Minimum Value in a Numeric Sequence

Use the Min operator to return the minimum value from a sequence of numeric values.

Example

The following example finds the lowest unit price of any product.

If you run this query against the Northwind sample database, the output is: 2.5000.

C#
System.Nullable<Decimal> lowestUnitPrice =
    (from prod in db.Products
    select prod.UnitPrice)
    .Min();

Console.WriteLine(lowestUnitPrice);

Example

The following example finds the lowest freight amount for any order.

If you run this query against the Northwind sample database, the output is: 0.0200.

C#
System.Nullable<Decimal> lowestFreight =
    (from ord in db.Orders
    select ord.Freight)
    .Min();

Console.WriteLine(lowestFreight);

Example

The following example uses Min to find the Products that have the lowest unit price in each category. The output is arranged by category.

C#
var minQuery =
    from prod in db.Products
    group prod by prod.CategoryID into grouping
    select new
    {
        grouping.Key,
        LeastExpensiveProducts =
            from prod2 in grouping
            where prod2.UnitPrice == grouping.Min(prod3 =>
            prod3.UnitPrice)
            select prod2
    };

foreach (var grp in minQuery)
{
    Console.WriteLine(grp.Key);
    foreach (var listing in grp.LeastExpensiveProducts)
    {
        Console.WriteLine(listing.ProductName);
    }
}

If you run the previous query against the Northwind sample database, your results will resemble the following:

1

Guaraná Fantástica

2

Aniseed Syrup

3

Teatime Chocolate Biscuits

4

Geitost

5

Filo Mix

6

Tourtière

7

Longlife Tofu

8

Konbu

See also

Compute the Sum of Values in a Numeric Sequence

Use the Sum operator to compute the sum of numeric values in a sequence.

Note the following characteristics of the Sum operator in LINQ to SQL:

  • The Standard Query Operator aggregate operator Sum evaluates to zero for an empty sequence or a sequence that contains only nulls. In LINQ to SQL, the semantics of SQL are left unchanged. For this reason, Sum evaluates to null instead of to zero for an empty sequence or for a sequence that contains only nulls.

  • SQL limitations on intermediate results apply to aggregates in LINQ to SQL. Sum of 32-bit integer quantities is not computed by using 64-bit results, and overflow can occur for the LINQ to SQL translation of Sum. This possibility exists even if the Standard Query Operator implementation does not cause an overflow for the corresponding in-memory sequence.

Example

The following example finds the total freight of all orders in the Order table.

If you run this query against the Northwind sample database, the output is: 64942.6900.

C#
System.Nullable<Decimal> totalFreight =
    (from ord in db.Orders
    select ord.Freight)
    .Sum();

Console.WriteLine(totalFreight);

Example

The following example finds the total number of units on order for all products.

If you run this query against the Northwind sample database, the output is: 780.

Note that you must cast short types (for example, UnitsOnOrder) because Sum has no overload for short types.

C#
System.Nullable<long> totalUnitsOnOrder =
    (from prod in db.Products
    select (long)prod.UnitsOnOrder)
    .Sum();

Console.WriteLine(totalUnitsOnOrder);

See also

Return the First Element in a Sequence

Use the First operator to return the first element in a sequence. Queries that use First are executed immediately.

Note

LINQ to SQL does not support the Last operator.

Example

The following code finds the first Shipper in a table:

If you run this query against the Northwind sample database, the results are ID = 1, Company = Speedy Express.

C#
Shipper shipper = db.Shippers.First();
Console.WriteLine("ID = {0}, Company = {1}", shipper.ShipperID,
    shipper.CompanyName);

Example

The following code finds the single Customer that has the CustomerID BONAP.

If you run this query against the Northwind sample database, the results are ID = BONAP, Contact = Laurence Lebihan.

C#
Customer custQuery =
    (from custs in db.Customers
    where custs.CustomerID == "BONAP"
    select custs)
    .First();

Console.WriteLine("ID = {0}, Contact = {1}", custQuery.CustomerID,
    custQuery.ContactName);

See also

Return Or Skip Elements in a Sequence

Use the Take operator to return a given number of elements in a sequence and then skip over the remainder.

Use the Skip operator to skip over a given number of elements in a sequence and then return the remainder.

Note

Take and Skip have certain limitations when they are used in queries against SQL Server 2000. For more information, see the "Skip and Take Exceptions in SQL Server 2000" entry in Troubleshooting.

LINQ to SQL translates Skip by using a subquery with the SQL NOT EXISTS clause. This translation has the following limitations:

  • The argument must be a set. Multisets are not supported, even if ordered.

  • The generated query can be much more complex than the query generated for the base query on which Skip is applied. This complexity can cause a decrease in performance or even a time-out.

Example

The following example uses Take to select the first five Employees hired. Note that the collection is first sorted by HireDate.

C#
IQueryable<Employee> firstHiredQuery =
    (from emp in db.Employees
    orderby emp.HireDate
    select emp)
    .Take(5);

foreach (Employee empObj in firstHiredQuery)
{
    Console.WriteLine("{0}, {1}", empObj.EmployeeID,
        empObj.HireDate);
}

Example

The following example uses Skip to select all except the 10 most expensive Products.

C#
IQueryable<Product> lessExpensiveQuery =
    (from prod in db.Products
    orderby prod.UnitPrice descending
    select prod)
    .Skip(10);

foreach (Product prodObj in lessExpensiveQuery)
{
    Console.WriteLine(prodObj.ProductName);
}

Example

The following example combines the Skip and Take methods to skip the first 50 records and then return the next 10.

C#
var custQuery2 =
    (from cust in db.Customers
    orderby cust.ContactName
    select cust)
    .Skip(50).Take(10);

foreach (var custRecord in custQuery2)
{
    Console.WriteLine(custRecord.ContactName);
}

Take and Skip operations are well defined only against ordered sets. The semantics for unordered sets or multisets are undefined.

Because of the limitations on ordering in SQL, LINQ to SQL tries to move the ordering of the argument of the Take or Skip operator to the result of the operator.

Note

Translation is different for SQL Server 2000 and SQL Server 2005. If you plan to use Skip with a query of any complexity, use SQL Server 2005.

Consider the following LINQ to SQL query for SQL Server 2000:

C#
IQueryable<Customer> custQuery3 =
    (from custs in db.Customers
     where custs.City == "London"
     orderby custs.CustomerID
     select custs)
    .Skip(1).Take(1);

foreach (var custObj in custQuery3)
{
    Console.WriteLine(custObj.CustomerID);
}

LINQ to SQL moves the ordering to the end in the SQL code, as follows:

SELECT TOP 1 [t0].[CustomerID], [t0].[CompanyName],  
FROM [Customers] AS [t0]  
WHERE (NOT (EXISTS(  
    SELECT NULL AS [EMPTY]  
    FROM (  
        SELECT TOP 1 [t1].[CustomerID]  
        FROM [Customers] AS [t1]  
        WHERE [t1].[City] = @p0  
        ORDER BY [t1].[CustomerID]  
        ) AS [t2]  
    WHERE [t0].[CustomerID] = [t2].[CustomerID]  
    ))) AND ([t0].[City] = @p1)  
ORDER BY [t0].[CustomerID]  

When Take and Skip are chained together, all the specified ordering must be consistent. Otherwise, the results are undefined.

Based on the SQL specification, both Take and Skip are well defined for non-negative, constant integral arguments.

See also

Sort Elements in a Sequence

Use the OrderBy operator to sort a sequence according to one or more keys.

Note

LINQ to SQL is designed to support ordering by simple primitive types, such as string, int, and so on. It does not support ordering for complex multi-valued classes, such as anonymous types. It also does not support byte data types.

Example

The following example sorts Employees by date of hire.

C#
IOrderedQueryable<Employee> hireQuery =
    from emp in db.Employees
    orderby emp.HireDate
    select emp;

foreach (Employee empObj in hireQuery)
{
    Console.WriteLine("EmpID = {0}, Date Hired = {1}",
        empObj.EmployeeID, empObj.HireDate);
}

Example

The following example uses a where clause to select Orders shipped to London, and sorts them by freight.

C#
IOrderedQueryable<Order> freightQuery =
    from ord in db.Orders
    where ord.ShipCity == "London"
    orderby ord.Freight
    select ord;

foreach (Order ordObj in freightQuery)
{
    Console.WriteLine("Order ID = {0}, Freight = {1}",
        ordObj.OrderID, ordObj.Freight);
}

Example

The following example sorts Products by unit price from highest to lowest.

C#
IOrderedQueryable<Product> priceQuery =
    from prod in db.Products
    orderby prod.UnitPrice descending
    select prod;

foreach (Product prodObj in priceQuery)
{
    Console.WriteLine("Product ID = {0}, Unit Price = {1}",
        prodObj.ProductID, prodObj.UnitPrice);
}

Example

The following example uses a compound OrderBy to sort Customers by city and then by contact name.

C#
IOrderedQueryable<Customer> custQuery =
    from cust in db.Customers
    orderby cust.City, cust.ContactName
    select cust;

foreach (Customer custObj in custQuery)
{
    Console.WriteLine("City = {0}, Name = {1}", custObj.City,
        custObj.ContactName);
}

Example

The following example sorts Orders from EmployeeID 1 by ShipCountry, and then by highest to lowest freight.

C#
IOrderedQueryable<Order> ordQuery =
    from ord in db.Orders
    where ord.EmployeeID == 1
    orderby ord.ShipCountry, ord.Freight descending
    select ord;

foreach (Order ordObj in ordQuery)
{
    Console.WriteLine("Country = {0}, Freight = {1}",
        ordObj.ShipCountry, ordObj.Freight);
}

Example

The following example combines the OrderBy, Max, and GroupBy operators to find the Products that have the highest unit price in each category, and then sorts the groups by CategoryID.

C#
var highPriceQuery =
    from prod in db.Products
    group prod by prod.CategoryID into grouping
    orderby grouping.Key
    select new
    {
        grouping.Key,
        MostExpensiveProducts =
            from prod2 in grouping
            where prod2.UnitPrice == grouping.Max(p3 => p3.UnitPrice)
            select prod2
    };

foreach (var prodObj in highPriceQuery)
{
    Console.WriteLine(prodObj.Key);
    foreach (var listing in prodObj.MostExpensiveProducts)
    {
        Console.WriteLine(listing.ProductName);
    }
}

If you run the previous query against the Northwind sample database, the results will resemble the following:

1

Côte de Blaye

2

Vegie-spread

3

Sir Rodney's Marmalade

4

Raclette Courdavault

5

Gnocchi di nonna Alice

6

Thüringer Rostbratwurst

7

Manjimup Dried Apples

8

Carnarvon Tigers

See also

Group Elements in a Sequence

The GroupBy operator groups the elements of a sequence. The following examples use the Northwind database.

Note

Null column values in GroupBy queries can sometimes throw an InvalidOperationException. For more information, see the "GroupBy InvalidOperationException" section of Troubleshooting.

Example

The following example partitions Products by CategoryID.

C#
IQueryable<IGrouping<Int32?, Product>> prodQuery =
    from prod in db.Products
    group prod by prod.CategoryID into grouping
    select grouping;

foreach (IGrouping<Int32?, Product> grp in prodQuery)
{
    Console.WriteLine("\nCategoryID Key = {0}:", grp.Key);
    foreach (Product listing in grp)
    {
        Console.WriteLine("\t{0}", listing.ProductName);
    }
}

Example

The following example uses Max to find the maximum unit price for each CategoryID.

C#
var q =
    from p in db.Products
    group p by p.CategoryID into g
    select new
    {
        g.Key,
        MaxPrice = g.Max(p => p.UnitPrice)
    };

Example

The following example uses Average to find the average UnitPrice for each CategoryID.

C#
var q2 =
    from p in db.Products
    group p by p.CategoryID into g
    select new
    {
        g.Key,
        AveragePrice = g.Average(p => p.UnitPrice)
    };

Example

The following example uses Sum to find the total UnitPrice for each CategoryID.

C#
var priceQuery =
    from prod in db.Products
    group prod by prod.CategoryID into grouping
    select new
    {
        grouping.Key,
        TotalPrice = grouping.Sum(p => p.UnitPrice)
    };

foreach (var grp in priceQuery)
{
    Console.WriteLine("Category = {0}, Total price = {1}",
        grp.Key, grp.TotalPrice);
}

Example

The following example uses Count to find the number of discontinued Products in each CategoryID.

C#
var disconQuery =
    from prod in db.Products
    group prod by prod.CategoryID into grouping
    select new
    {
        grouping.Key,
        NumProducts = grouping.Count(p => p.Discontinued)
    };

foreach (var prodObj in disconQuery)
{
    Console.WriteLine("CategoryID = {0}, Discontinued# = {1}",
        prodObj.Key, prodObj.NumProducts);
}

Example

The following example uses a where clause after the grouping to find all categories that have at least 10 products.

C#
var prodCountQuery =
    from prod in db.Products
    group prod by prod.CategoryID into grouping
    where grouping.Count() >= 10
    select new
    {
        grouping.Key,
        ProductCount = grouping.Count()
    };

foreach (var prodCount in prodCountQuery)
{
    Console.WriteLine("CategoryID = {0}, Product count = {1}",
        prodCount.Key, prodCount.ProductCount);
}

Example

The following example groups products by CategoryID and SupplierID.

C#
var prodQuery =
    from prod in db.Products
    group prod by new
    {
        prod.CategoryID,
        prod.SupplierID
    }
    into grouping
    select new { grouping.Key, grouping };

foreach (var grp in prodQuery)
{
    Console.WriteLine("\nCategoryID {0}, SupplierID {1}",
        grp.Key.CategoryID, grp.Key.SupplierID);
    foreach (var listing in grp.grouping)
    {
        Console.WriteLine("\t{0}", listing.ProductName);
    }
}

Example

The following example returns two sequences of products. The first sequence contains products with unit price less than or equal to 10. The second sequence contains products with unit price greater than 10.

C#
var priceQuery =
    from prod in db.Products
    group prod by new
    {
        Criterion = prod.UnitPrice > 10
    }
    into grouping
    select grouping;

foreach (var prodObj in priceQuery)
{
    if (prodObj.Key.Criterion == false)
        Console.WriteLine("Prices 10 or less:");
    else
        Console.WriteLine("\nPrices greater than 10");
    foreach (var listing in prodObj)
    {
        Console.WriteLine("{0}, {1}", listing.ProductName,
            listing.UnitPrice);
    }
}

Example

The GroupBy operator can take only a single key argument. If you need to group by more than one key, you must create an anonymous type, as in the following example:

C#
var custRegionQuery =
    from cust in db.Customers
    group cust.ContactName by new { City = cust.City, Region = cust.Region };

foreach (var grp in custRegionQuery)
{
    Console.WriteLine("\nLocation Key: {0}", grp.Key);
    foreach (var listing in grp)
    {
        Console.WriteLine("\t{0}", listing);
    }
}

See also

Eliminate Duplicate Elements from a Sequence

Use the Distinct operator to eliminate duplicate elements from a sequence.

Example

The following example uses Distinct to select a sequence of the unique cities that have customers.

C#
IQueryable<String> cityQuery =
    (from cust in db.Customers
    select cust.City).Distinct();

foreach (String cityString in cityQuery)
{
    Console.WriteLine(cityString);
}

See also

Determine if Any or All Elements in a Sequence Satisfy a Condition

The All operator returns true if all elements in a sequence satisfy a condition.

The Any operator returns true if any element in a sequence satisfies a condition.

Example

The following example returns a sequence of customers that have at least one order. The Where/where clause evaluates to true if the given Customer has any Order.

C#
var OrdersQuery =
    from cust in db.Customers
    where cust.Orders.Any()
    select cust;

Example

The following Visual Basic code finds the customers who have not placed orders, and then checks that every customer in that list has a contact name.

VB
Public Sub ContactsAvailable()
    Dim db As New Northwnd("c:\northwnd.mdf")
    Dim result = _
        (From cust In db.Customers _
        Where Not cust.Orders.Any() _
        Select cust).All(AddressOf ContactAvailable)

    If result Then
        Console.WriteLine _
    ("All of the customers who have made no orders have a contact name")
    Else
        Console.WriteLine _
    ("Some customers who have made no orders have no contact name")
    End If
End Sub

Function ContactAvailable(ByVal contact As Object) As Boolean
    Dim cust As Customer = CType(contact, Customer)
    ' Return True only when a contact name is actually provided.
    Return Not (cust.ContactName Is Nothing OrElse _
        cust.ContactName.Trim().Length = 0)
End Function

Example

The following C# example returns a sequence of customers whose orders have a ShipCity beginning with "C". Also included in the return are customers who have no orders. (By design, the All operator returns true for an empty sequence.) Customers with no orders are eliminated in the console output by using the Count operator.

C#
var custEmpQuery =
    from cust in db.Customers
    where cust.Orders.All(o => o.ShipCity.StartsWith("C"))
    orderby cust.CustomerID
    select cust;

foreach (Customer custObj in custEmpQuery)
{
    if (custObj.Orders.Count > 0)
        Console.WriteLine("CustomerID: {0}", custObj.CustomerID);
    foreach (Order ordObj in custObj.Orders)
    {
        Console.WriteLine("\t OrderID: {0}; ShipCity: {1}",
            ordObj.OrderID, ordObj.ShipCity);
    }
}

See also

Concatenate Two Sequences

Use the Concat operator to concatenate two sequences.

The Concat operator is defined for ordered multisets where the orders of the receiver and the argument are the same.

Ordering in SQL is the final step before results are produced. For this reason, the Concat operator is implemented by using UNION ALL and does not preserve the order of its arguments. To make sure ordering is correct in the results, make sure to explicitly order the results.
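
As a minimal sketch of that advice (the ordering key is illustrative), the following query applies OrderBy to the concatenated sequence itself, so the final order of the combined telephone numbers is explicit.

C#
IQueryable<String> orderedPhoneQuery =
    (from cust in db.Customers
    select cust.Phone)
    .Concat
        (from emp in db.Employees
        select emp.HomePhone)
    .OrderBy(phone => phone);

foreach (String phoneString in orderedPhoneQuery)
{
    Console.WriteLine(phoneString);
}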

Example

This example uses Concat to return a sequence of all Customer telephone and fax numbers and all Employee home telephone numbers.

C#
IQueryable<String> custQuery =
    (from cust in db.Customers
    select cust.Phone)
    .Concat
    (from cust in db.Customers
    select cust.Fax)
    .Concat
    (from emp in db.Employees
    select emp.HomePhone)
;

foreach (var custData in custQuery)
{
    Console.WriteLine(custData);
}

Example

This example uses Concat to return a sequence of all Customer and Employee name and telephone number mappings.

C#
var infoQuery =
    (from cust in db.Customers
    select new { Name = cust.CompanyName, cust.Phone }
    )
   .Concat
       (from emp in db.Employees
       select new
       {
           Name = emp.FirstName + " " + emp.LastName,
           Phone = emp.HomePhone
       }
       );

foreach (var infoData in infoQuery)
{
    Console.WriteLine("Name = {0}, Phone = {1}",
        infoData.Name, infoData.Phone);
}

See also

Return the Set Difference Between Two Sequences

Use the Except operator to return the set difference between two sequences.

Example

This example uses Except to return a sequence of all countries/regions in which Customers live but in which no Employees live.

C#
var infoQuery =
    (from cust in db.Customers
    select cust.Country)
    .Except
        (from emp in db.Employees
        select emp.Country)
;

In LINQ to SQL, the Except operation is well defined only on sets. The semantics for multisets are undefined.

See also

Return the Set Intersection of Two Sequences

Use the Intersect operator to return the set intersection of two sequences.

Example

This example uses Intersect to return a sequence of all countries/regions in which both Customers and Employees live.

C#
var infoQuery =
    (from cust in db.Customers
    select cust.Country)
    .Intersect
        (from emp in db.Employees
        select emp.Country)
;

In LINQ to SQL, the Intersect operation is well defined only on sets. The semantics for multisets are undefined.

See also

Return the Set Union of Two Sequences

Use the Union operator to return the set union of two sequences.

Example

This example uses Union to return a sequence of all countries/regions in which there are either Customers or Employees.

C#
var infoQuery =
    (from cust in db.Customers
    select cust.Country)
    .Union
        (from emp in db.Employees
        select emp.Country)
;

In LINQ to SQL, the Union operator is defined for multisets as the unordered concatenation of the multisets (effectively the result of the UNION ALL clause in SQL).

For more info and examples, see Queryable.Union.

See also

Convert a Sequence to an Array

Use ToArray to create an array from a sequence.

Example

The following example uses ToArray to immediately evaluate a query into an array and to get the third element.

C#
var custQuery =
    from cust in db.Customers
    where cust.City == "London"
    select cust;

Customer[] qArray = custQuery.ToArray();

// The third element is at index 2.
Customer thirdCustomer = qArray[2];

See also

Convert a Sequence to a Generic List

Use ToList to create a generic List from a sequence.

Example

The following sample uses ToList to immediately evaluate a query into a generic List<T>.

C#
var empQuery =
    from emp in db.Employees
    where emp.HireDate >= new DateTime(1994, 1, 1)
    select emp;

List<Employee> qList = empQuery.ToList();

See also

Convert a Type to a Generic IEnumerable

Use AsEnumerable to return the argument typed as a generic IEnumerable.

Example

In this example, LINQ to SQL (using the default generic IQueryable<T> query) would try to convert the query to SQL and execute it on the server. But the where clause references a user-defined client-side method (isValidProduct), which cannot be converted to SQL.

The solution is to use the client-side generic IEnumerable<T> implementation of where instead of the generic IQueryable<T> version. You do this by invoking the AsEnumerable operator.

C#
// Returns true when the only 'C' in the product name is its first character.
private bool isValidProduct(Product prod)
{
    return prod.ProductName.LastIndexOf('C') == 0;
}

void ConvertToIEnumerable()
{
    Northwnd db = new Northwnd(@"c:\test\northwnd.mdf");
    // AsEnumerable switches the query to LINQ to Objects, so the
    // client-side isValidProduct method can be called.
    var prodQuery =
        from prod in db.Products.AsEnumerable()
        where isValidProduct(prod)
        select prod;
}

See also

Formulate Joins and Cross-Product Queries

The following examples show how to combine results from multiple tables.

Example

The following example uses foreign key navigation in the From clause in Visual Basic (from clause in C#) to select all orders for customers in London.

C#
var infoQuery =
    from cust in db.Customers
    from ord in cust.Orders
    where cust.City == "London"
    select ord;

Example

The following example uses foreign key navigation in the Where clause in Visual Basic (where clause in C#) to filter for out-of-stock Products whose Supplier is in the United States.

C#
var infoQuery =
    from prod in db.Products
    where prod.Supplier.Country == "USA" && prod.UnitsInStock == 0
    select prod;

Example

The following example uses foreign key navigation in the From clause in Visual Basic (from clause in C#) to filter for employees in Seattle and to list their territories.

C#
var infoQuery =
    from emp in db.Employees
    from empterr in emp.EmployeeTerritories
    where emp.City == "Seattle"
    select new
    {
        emp.FirstName,
        emp.LastName,
        empterr.Territory.TerritoryDescription
    };

Example

The following example uses foreign key navigation in the From clause in Visual Basic (from clause in C#) to filter for pairs of employees where one employee reports to the other and where both are from the same City.

C#
var infoQuery =
    from emp1 in db.Employees
    from emp2 in emp1.Employees
    where emp1.City == emp2.City
    select new
    {
        FirstName1 = emp1.FirstName,
        LastName1 = emp1.LastName,
        FirstName2 = emp2.FirstName,
        LastName2 = emp2.LastName,
        emp1.City
    };

Example

The following Visual Basic example joins the Customers and Orders tables on CustomerID and selects the company name and ship region for each match. The second query produces the same results more concisely by using the relationship that the O/R Designer generates.

VB
Dim q1 = From c In db.Customers, o In db.Orders _
    Where c.CustomerID = o.CustomerID _
    Select c.CompanyName, o.ShipRegion

' Note that because the O/R designer generates class
' hierarchies for database relationships for you,
' the following code has the same effect as the above
' and is shorter:

Dim q2 = From c In db.Customers, o In c.Orders _
    Select c.CompanyName, o.ShipRegion

For Each nextItem In q2
    Console.WriteLine("{0}   {1}", nextItem.CompanyName, _
        nextItem.ShipRegion)
Next

Example

The following example explicitly joins two tables and projects results from both tables.

C#
var q =
    from c in db.Customers
    join o in db.Orders on c.CustomerID equals o.CustomerID
        into orders
    select new { c.ContactName, OrderCount = orders.Count() };

Example

The following example explicitly joins three tables and projects results from each of them.

C#
var q =
    from c in db.Customers
    join o in db.Orders on c.CustomerID equals o.CustomerID
        into ords
    join e in db.Employees on c.City equals e.City into emps
    select new
    {
        c.ContactName,
        ords = ords.Count(),
        emps = emps.Count()
    };

Example

The following example shows how to achieve a LEFT OUTER JOIN by using DefaultIfEmpty(). The DefaultIfEmpty() method returns null when there is no Order for the Employee.

C#
var q =
    from e in db.Employees
    join o in db.Orders on e equals o.Employee into ords
        from o in ords.DefaultIfEmpty()
        select new { e.FirstName, e.LastName, Order = o };

Example

The following example projects a let expression resulting from a join.

C#
var q =
    from c in db.Customers
    join o in db.Orders on c.CustomerID equals o.CustomerID
        into ords
    let z = c.City + c.Country
        from o in ords
        select new { c.ContactName, o.OrderID, z };

Example

The following example shows a join with a composite key.

C#
var q =
    from o in db.Orders
    from p in db.Products
    join d in db.OrderDetails
        on new { o.OrderID, p.ProductID } equals new
    {
        d.OrderID,
        d.ProductID
    } into details
        from d in details
        select new { o.OrderID, p.ProductID, d.UnitPrice };

Example

The following example shows how to construct a join where one side is nullable and the other is not.

C#
var q =
    from o in db.Orders
    join e in db.Employees
        on o.EmployeeID equals (int?)e.EmployeeID into emps
        from e in emps
        select new { o.OrderID, e.FirstName };

See also

Formulate Projections

The following examples show how the select clause in C# and the Select clause in Visual Basic can be combined with other features to form query projections.

Example

The following example uses the Select clause in Visual Basic (select clause in C#) to return a sequence of contact names for Customers.

C#
var nameQuery =
    from cust in db.Customers
    select cust.ContactName;

Example

The following example uses the Select clause in Visual Basic (select clause in C#) and anonymous types to return a sequence of contact names and telephone numbers for Customers.

C#
var infoQuery =
    from cust in db.Customers
    select new { cust.ContactName, cust.Phone };

Example

The following example uses the Select clause in Visual Basic (select clause in C#) and anonymous types to return a sequence of names and telephone numbers for employees. The FirstName and LastName fields are combined into a single field (Name), and the HomePhone field is renamed to Phone in the resulting sequence.

C#
var info2Query =
    from emp in db.Employees
    select new
    {
        Name = emp.FirstName + " " + emp.LastName,
        Phone = emp.HomePhone
    };

Example

The following example uses the Select clause in Visual Basic (select clause in C#) and anonymous types to return a sequence of all ProductIDs and a calculated value named HalfPrice. This value is set to the UnitPrice divided by 2.

C#
var specialQuery =
    from prod in db.Products
    select new { prod.ProductID, HalfPrice = prod.UnitPrice / 2 };

Example

The following example uses the Select clause in Visual Basic (select clause in C#) and a conditional statement to return a sequence of product name and product availability.

C#
var prodQuery =
    from prod in db.Products
    select new
    {
        prod.ProductName,
        Availability =
            prod.UnitsInStock - prod.UnitsOnOrder < 0
        ? "Out Of Stock" : "In Stock"
    };

Example

The following example uses a Visual Basic Select clause (select clause in C#) and a known type (Name) to return a sequence of the names of employees.

C#
public class Name
{
    public string FirstName = "";
    public string LastName = "";
}

void empMethod()
{
    Northwnd db = new Northwnd(@"c:\northwnd.mdf");
    var empQuery =
        from emp in db.Employees
        select new Name
        {
            FirstName = emp.FirstName,
            LastName = emp.LastName
        };
}

Example

The following example uses Select and Where in Visual Basic (select and where in C#) to return a filtered sequence of contact names for customers in London.

C#
var contactQuery =
    from cust in db.Customers
    where cust.City == "London"
    select cust.ContactName;

Example

The following example uses a Select clause in Visual Basic (select clause in C#) and anonymous types to return a shaped subset of the data about customers.

C#
var custQuery =
    from cust in db.Customers
    select new
    {
        cust.CustomerID,
        CompanyInfo = new { cust.CompanyName, cust.City, cust.Country },
        ContactInfo = new { cust.ContactName, cust.ContactTitle }
    };

Example

The following example uses nested queries to return the following results:

  • A sequence of all orders and their corresponding OrderIDs.

  • A subsequence of the items in the order for which there is a discount.

  • The amount of money saved if the cost of shipping is not included.

C#
var ordQuery =
    from ord in db.Orders
    select new
    {
        ord.OrderID,
        DiscountedProducts =
            from od in ord.OrderDetails
            where od.Discount > 0.0
            select od,
        FreeShippingDiscount = ord.Freight
    };

See also

How to: Insert Rows Into the Database

You insert rows into a database by adding objects to the associated LINQ to SQL Table<TEntity> collection and then submitting the changes to the database. LINQ to SQL translates your changes into the appropriate SQL INSERT commands.

Note

You can override LINQ to SQL default methods for Insert, Update, and Delete database operations. For more information, see Customizing Insert, Update, and Delete Operations.

Developers using Visual Studio can use the Object Relational Designer to develop stored procedures for the same purpose.

The following steps assume that a valid DataContext connects you to the Northwind database. For more information, see How to: Connect to a Database.

To insert a row into the database

  1. Create a new object that includes the column data to be submitted.

  2. Add the new object to the LINQ to SQL Table collection associated with the target table in the database.

  3. Submit the change to the database.

Example

The following code example creates a new object of type Order and populates it with appropriate values. It then adds the new object to the Order collection. Finally, it submits the change to the database as a new row in the Orders table.

C#
// Create a new Order object.
Order ord = new Order
{
    OrderID = 12000,
    ShipCity = "Seattle",
    OrderDate = DateTime.Now
    // …
};

// Add the new object to the Orders collection.
db.Orders.InsertOnSubmit(ord);

// Submit the change to the database.
try
{
    db.SubmitChanges();
}
catch (Exception e)
{
    Console.WriteLine(e);
    // Make some adjustments.
    // ...
    // Try again.
    db.SubmitChanges();
}

See also

How to: Update Rows in the Database

You can update rows in a database by modifying member values of the objects associated with the LINQ to SQL Table<TEntity> collection and then submitting the changes to the database. LINQ to SQL translates your changes into the appropriate SQL UPDATE commands.

Note

You can override LINQ to SQL default methods for Insert, Update, and Delete database operations. For more information, see Customizing Insert, Update, and Delete Operations.

Developers using Visual Studio can use the Object Relational Designer to develop stored procedures for the same purpose.

The following steps assume that a valid DataContext connects you to the Northwind database. For more information, see How to: Connect to a Database.

To update a row in the database

  1. Query the database for the row to be updated.

  2. Make desired changes to member values in the resulting LINQ to SQL object.

  3. Submit the changes to the database.

Example

The following example queries the database for order #11000, and then changes the values of ShipName and ShipVia in the resulting Order object. Finally, the changes to these member values are submitted to the database as changes in the ShipName and ShipVia columns.

C#
// Query the database for the row to be updated.
var query =
    from ord in db.Orders
    where ord.OrderID == 11000
    select ord;

// Execute the query, and change the column values
// you want to change.
foreach (Order ord in query)
{
    ord.ShipName = "Mariner";
    ord.ShipVia = 2;
    // Insert any additional changes to column values.
}

// Submit the changes to the database.
try
{
    db.SubmitChanges();
}
catch (Exception e)
{
    Console.WriteLine(e);
    // Provide for exceptions.
}

See also

How to: Delete Rows From the Database

You can delete rows in a database by removing the corresponding LINQ to SQL objects from their table-related collection. LINQ to SQL translates your changes to the appropriate SQL DELETE commands.

LINQ to SQL does not support or recognize cascade-delete operations. If you want to delete a row in a table that has constraints against it, you must complete either of the following tasks:

  • Set the ON DELETE CASCADE rule in the foreign-key constraint in the database.

  • Use your own code to first delete the child objects that prevent the parent object from being deleted.

Otherwise, an exception is thrown. See the second code example later in this topic.

Note

You can override LINQ to SQL default methods for Insert, Update, and Delete database operations. For more information, see Customizing Insert, Update, and Delete Operations.

Developers using Visual Studio can use the Object Relational Designer to develop stored procedures for the same purpose.

The following steps assume that a valid DataContext connects you to the Northwind database. For more information, see How to: Connect to a Database.

To delete a row in the database

  1. Query the database for the row to be deleted.

  2. Call the DeleteOnSubmit method.

  3. Submit the change to the database.

Example

This first code example queries the database for order details that belong to Order #11000, marks these order details for deletion, and submits these changes to the database.

C#
// Query the database for the rows to be deleted.
var deleteOrderDetails =
    from details in db.OrderDetails
    where details.OrderID == 11000
    select details;

foreach (var detail in deleteOrderDetails)
{
    db.OrderDetails.DeleteOnSubmit(detail);
}
                        
try
{
    db.SubmitChanges();
}
catch (Exception e)
{
    Console.WriteLine(e);
    // Provide for exceptions.
}

Example

In this second example, the objective is to remove an order (#10250). The code first examines the OrderDetails table to see whether the order to be removed has children there. If the order has children, first the children and then the order are marked for removal. The DataContext puts the actual deletes in correct order so that delete commands sent to the database abide by the database constraints.

C#
Northwnd db = new Northwnd(@"c:\northwnd.mdf");

db.Log = Console.Out;

// Specify order to be removed from database
int reqOrder = 10250;

// Fetch OrderDetails for requested order.
var ordDetailQuery =
    from odq in db.OrderDetails
    where odq.OrderID == reqOrder
    select odq;

foreach (var selectedDetail in ordDetailQuery)
{
    Console.WriteLine(selectedDetail.Product.ProductID);
    db.OrderDetails.DeleteOnSubmit(selectedDetail);
}

// Display progress.
Console.WriteLine("detail section finished.");
Console.ReadLine();

// Determine from Detail collection whether parent exists.
if (ordDetailQuery.Any())
{
    Console.WriteLine("The parent is presesnt in the Orders collection.");
    // Fetch Order.
    try
    {
        var ordFetch =
            (from ofetch in db.Orders
             where ofetch.OrderID == reqOrder
             select ofetch).First();
        db.Orders.DeleteOnSubmit(ordFetch);
        Console.WriteLine("{0} OrderID is marked for deletion.", ordFetch.OrderID);
    }
    catch (Exception e)
    {
        Console.WriteLine(e.Message);
        Console.ReadLine();
    }
}
else
{
    Console.WriteLine("There was no parent in the Orders collection.");
}


// Display progress.
Console.WriteLine("Order section finished.");
Console.ReadLine();

try
{
    db.SubmitChanges();
}
catch (Exception e)
{
    Console.WriteLine(e.Message);
    Console.ReadLine();
}

// Display progress.
Console.WriteLine("Submit finished.");
Console.ReadLine();

See also

How to: Submit Changes to the Database

Regardless of how many changes you make to your objects, changes are made only to in-memory replicas. You have made no changes to the actual data in the database. Your changes are not transmitted to the server until you explicitly call SubmitChanges on the DataContext.

When you make this call, the DataContext tries to translate your changes into equivalent SQL commands. You can use your own custom logic to override these actions, but the order of submission is orchestrated by a service of the DataContext known as the change processor. The sequence of events is as follows:

  1. When you call SubmitChanges, LINQ to SQL examines the set of known objects to determine whether new instances have been attached to them. If they have, these new instances are added to the set of tracked objects.

  2. All objects that have pending changes are ordered into a sequence of objects based on the dependencies between them. Objects whose changes depend on other objects are sequenced after their dependencies.

  3. Immediately before any actual changes are transmitted, LINQ to SQL starts a transaction to encapsulate the series of individual commands.

  4. The changes to the objects are translated one by one to SQL commands and sent to the server.

At this point, any errors detected by the database cause the submission process to stop, and an exception is raised. All changes to the database are rolled back as if no submissions ever occurred. The DataContext still has a full recording of all changes. You can therefore try to correct the problem and call SubmitChanges again, as in the code example that follows.

Example

When the transaction around the submission is completed successfully, the DataContext accepts the changes to the objects by ignoring the change-tracking information.

C#
Northwnd db = new Northwnd(@"c:\northwnd.mdf");
// Make changes here. 
try
{
    db.SubmitChanges();
}
catch (ChangeConflictException e)
{
    Console.WriteLine(e.Message);
    // Make some adjustments.
    // ...
    // Try again.
    db.SubmitChanges();
}

See also

How to: Bracket Data Submissions by Using Transactions

You can use TransactionScope to bracket your submissions to the database. For more information, see Transaction Support.

Example

The following code encloses the database submission in a TransactionScope.

C#
Northwnd db = new Northwnd(@"c:\northwnd.mdf");
using (TransactionScope ts = new TransactionScope())
{
    try
    {
        Product prod1 = db.Products.First(p => p.ProductID == 4);
        Product prod2 = db.Products.First(p => p.ProductID == 5);
        prod1.UnitsInStock -= 3;
        prod2.UnitsInStock -= 5;
        db.SubmitChanges();
        ts.Complete();
    }
    catch (Exception e)
    {
        Console.WriteLine(e.Message);
    }
}

See also

How to: Dynamically Create a Database

In LINQ to SQL, an object model is mapped to a relational database. Mapping is enabled by using attribute-based mapping or an external mapping file to describe the structure of the relational database. In both scenarios, there is enough information about the relational database that you can create a new instance of the database using the DataContext.CreateDatabase method.

The DataContext.CreateDatabase method creates a replica of the database only to the extent of the information encoded in the object model. Mapping files and attributes from your object model might not encode everything about the structure of an existing database. Mapping information does not represent the contents of user-defined functions, stored procedures, triggers, or check constraints. This behavior is sufficient for a variety of databases.

You can use the DataContext.CreateDatabase method in any number of scenarios, especially if a known data provider like Microsoft SQL Server 2008 is available. Typical scenarios include the following:

  • You are building an application that automatically installs itself on a customer system.

  • You are building a client application that needs a local database to save its offline state.

You can also use the DataContext.CreateDatabase method with SQL Server by using an .mdf file or a catalog name, depending on your connection string. LINQ to SQL uses the connection string to define the database to be created and on which server the database is to be created.

Note

Whenever possible, use Windows Integrated Security to connect to the database so that passwords are not required in the connection string.

Example

The following code provides an example of how to create a new database named MyDVDs.mdf.

C#
public class MyDVDs : DataContext
{
    public Table<DVD> DVDs;
    public MyDVDs(string connection) : base(connection) { }
}

[Table(Name = "DVDTable")]
public class DVD
{
    [Column(IsPrimaryKey = true)]
    public string Title;
    [Column]
    public string Rating;
}

Example

You can use the object model to create a database by doing the following:

C#
public void CreateDatabase()
{
    MyDVDs db = new MyDVDs("c:\\mydvds.mdf");
    db.CreateDatabase();
}
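
The connection string does not have to point to an .mdf file. The following is a minimal sketch, assuming a local SQL Server Express instance (the server name, catalog name, and security settings are illustrative): LINQ to SQL reads the target server and database name from the connection string and creates the MyDVDs catalog there.

C#
public void CreateDatabaseOnServer()
{
    // Server, catalog name, and security settings below are illustrative.
    MyDVDs db = new MyDVDs(@"Data Source=.\SQLEXPRESS;Initial Catalog=MyDVDs;Integrated Security=True");
    db.CreateDatabase();
}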

Example

When you build an application that automatically installs itself on a customer system, check whether the database already exists, and drop it before creating a new one. The DataContext class provides the DatabaseExists and DeleteDatabase methods to help you with this process.

The following example shows one way these methods can be used to implement this approach:

C#
public void CreateDatabase2()
{
    MyDVDs db = new MyDVDs(@"c:\mydvds.mdf");
    if (db.DatabaseExists())
    {
        Console.WriteLine("Deleting old database...");
        db.DeleteDatabase();
    }
    db.CreateDatabase();
}

See also

How to: Manage Change Conflicts

LINQ to SQL provides a collection of APIs to help you discover, evaluate, and resolve concurrency conflicts.

In This Section

How to: Detect and Resolve Conflicting Submissions
Describes how to detect and resolve concurrency conflicts.

How to: Specify When Concurrency Exceptions are Thrown
Describes how to specify when you should be informed of concurrency conflicts.

How to: Specify Which Members are Tested for Concurrency Conflicts
Describes how to attribute members to specify whether they are checked for concurrency conflicts.

How to: Retrieve Entity Conflict Information
Describes how to gather information about entity conflicts.

How to: Retrieve Member Conflict Information
Describes how to gather information about member conflicts.

How to: Resolve Conflicts by Retaining Database Values
Describes how to overwrite current values with database values.

How to: Resolve Conflicts by Overwriting Database Values
Describes how to keep current values by overwriting database values.

How to: Resolve Conflicts by Merging with Database Values
Describes how to resolve a conflict by merging database and current values.

Related Sections

Optimistic Concurrency: Overview
Explains the terms that apply to optimistic concurrency in LINQ to SQL.

How to: Detect and Resolve Conflicting Submissions

LINQ to SQL provides many resources for detecting and resolving conflicts that stem from multi-user changes to the database. For more information, see How to: Manage Change Conflicts.

Example

The following example shows a try/catch block that catches a ChangeConflictException exception. Entity and member information for each conflict is displayed in the console window.

Note

You must include the using System.Reflection directive (Imports System.Reflection in Visual Basic) to support the information retrieval. For more information, see System.Reflection.

C#
// using System.Reflection;
Northwnd db = new Northwnd(@"c:\northwnd.mdf");

Customer newCust = new Customer();
newCust.City = "Auburn";
newCust.CustomerID = "AUBUR";
newCust.CompanyName = "AubCo";
db.Customers.InsertOnSubmit(newCust);

try
{
    db.SubmitChanges(ConflictMode.ContinueOnConflict);
}
catch (ChangeConflictException e)
{
    Console.WriteLine("Optimistic concurrency error.");
    Console.WriteLine(e.Message);
    Console.ReadLine();
    foreach (ObjectChangeConflict occ in db.ChangeConflicts)
    {
        MetaTable metatable = db.Mapping.GetTable(occ.Object.GetType());
        Customer entityInConflict = (Customer)occ.Object;
        Console.WriteLine("Table name: {0}", metatable.TableName);
        Console.Write("Customer ID: ");
        Console.WriteLine(entityInConflict.CustomerID);
        foreach (MemberChangeConflict mcc in occ.MemberConflicts)
        {
            object currVal = mcc.CurrentValue;
            object origVal = mcc.OriginalValue;
            object databaseVal = mcc.DatabaseValue;
            MemberInfo mi = mcc.Member;
            Console.WriteLine("Member: {0}", mi.Name);
            Console.WriteLine("current value: {0}", currVal);
            Console.WriteLine("original value: {0}", origVal);
            Console.WriteLine("database value: {0}", databaseVal);
        }
    }
}
catch (Exception ee)
{
    // Catch other exceptions.
    Console.WriteLine(ee.Message);
}
finally
{
    Console.WriteLine("TryCatch block has finished.");
}

See also

How to: Specify When Concurrency Exceptions are Thrown

In LINQ to SQL, a ChangeConflictException exception is thrown when objects do not update because of optimistic concurrency conflicts. For more information, see Optimistic Concurrency: Overview.

Before you submit your changes to the database, you can specify when concurrency exceptions should be thrown:

  • Throw the exception at the first failure (FailOnFirstConflict).

  • Finish all update tries, accumulate all failures, and report the accumulated failures in the exception (ContinueOnConflict).

When thrown, the ChangeConflictException exception provides access to a ChangeConflictCollection collection. This collection provides details for each conflict (mapped to a single failed update try), including access to the MemberConflicts collection. Each member conflict maps to a single member in the update that failed the concurrency check.

Example

The following code shows examples of both values.

C#
Northwnd db = new Northwnd("...");

// Create, update, delete code.

db.SubmitChanges(ConflictMode.FailOnFirstConflict);
// or
db.SubmitChanges(ConflictMode.ContinueOnConflict);

See also

How to: Specify Which Members are Tested for Concurrency Conflicts

Apply one of the three UpdateCheck enumeration values to the LINQ to SQL UpdateCheck property on a ColumnAttribute attribute to specify which members are to be included in the update checks that detect optimistic concurrency conflicts.

The UpdateCheck property (mapped at design time) is used together with run-time concurrency features in LINQ to SQL. For more information, see Optimistic Concurrency: Overview.

Note

Original member values are compared with the current database state as long as no member is designated as IsVersion=true. For more information, see IsVersion.

For code examples, see UpdateCheck.

To always use this member for detecting conflicts

  1. Add the UpdateCheck property to the ColumnAttribute attribute.

  2. Set the UpdateCheck property value to Always.

To never use this member for detecting conflicts

  1. Add the UpdateCheck property to the ColumnAttribute attribute.

  2. Set the UpdateCheck property value to Never.

To use this member for detecting conflicts only when the application has changed the value of the member

  1. Add the UpdateCheck property to the ColumnAttribute attribute.

  2. Set the UpdateCheck property value to WhenChanged.

Example

The following example specifies that the HomePage member should never be tested during update checks. For more information, see UpdateCheck.

C#
[Column(Storage="_HomePage", DbType="NText", UpdateCheck=UpdateCheck.Never)]
public string HomePage
{
    get
    {
        return this._HomePage;
    }
    set
    {
        if ((this._HomePage != value))
    {
        this.OnHomePageChanging(value);
        this.SendPropertyChanging();
            this._HomePage = value;
        this.SendPropertyChanged("HomePage");
            this.OnHomePageChanged();
    }
    }
}
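
For comparison, the following sketch shows the other two UpdateCheck settings on illustrative Northwind columns. The property bodies are simplified; designer-generated code also raises change notifications as in the preceding example.

C#
[Column(Storage="_CompanyName", DbType="NVarChar(40)", UpdateCheck=UpdateCheck.Always)]
public string CompanyName
{
    get { return this._CompanyName; }
    set { this._CompanyName = value; }
}

[Column(Storage="_Phone", DbType="NVarChar(24)", UpdateCheck=UpdateCheck.WhenChanged)]
public string Phone
{
    get { return this._Phone; }
    set { this._Phone = value; }
}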

See also

How to: Retrieve Entity Conflict Information

You can use objects of the ObjectChangeConflict class to provide information about conflicts revealed by ChangeConflictException exceptions. For more information, see Optimistic Concurrency: Overview.

Example

The following example iterates through a list of accumulated conflicts.

C#
Northwnd db = new Northwnd("...");

try
{
    db.SubmitChanges(ConflictMode.ContinueOnConflict);
}

catch (ChangeConflictException e)
{
    Console.WriteLine("Optimistic concurrency error.");
    Console.WriteLine(e.Message);
    foreach (ObjectChangeConflict occ in db.ChangeConflicts)
    {
        MetaTable metatable = db.Mapping.GetTable(occ.Object.GetType());
        Customer entityInConflict = (Customer)occ.Object;
        Console.WriteLine("Table name: {0}", metatable.TableName);
        Console.Write("Customer ID: ");
        Console.WriteLine(entityInConflict.CustomerID);
        Console.ReadLine();
    }
}

See also

How to: Retrieve Member Conflict Information

You can use the MemberChangeConflict class to retrieve information about individual members in conflict. In this same context you can provide for custom handling of the conflict for any member. For more information, see Optimistic Concurrency: Overview.

Example

The following code iterates through the ObjectChangeConflict objects. For each object, it then iterates through the MemberChangeConflict objects.

Note

Include a using System.Reflection directive (Imports System.Reflection in Visual Basic) to provide the Member information.

C#
// Add 'using System.Reflection' for this section.
Northwnd db = new Northwnd("...");
            
try
{
    db.SubmitChanges(ConflictMode.ContinueOnConflict);
}

catch (ChangeConflictException e)
{
    Console.WriteLine("Optimistic concurrency error.");
    Console.WriteLine(e.Message);
    foreach (ObjectChangeConflict occ in db.ChangeConflicts)
    {
        MetaTable metatable = db.Mapping.GetTable(occ.Object.GetType());
        Customer entityInConflict = (Customer)occ.Object;
        Console.WriteLine("Table name: {0}", metatable.TableName);
        Console.Write("Customer ID: ");
        Console.WriteLine(entityInConflict.CustomerID);
        foreach (MemberChangeConflict mcc in occ.MemberConflicts)
        {
            object currVal = mcc.CurrentValue;
            object origVal = mcc.OriginalValue;
            object databaseVal = mcc.DatabaseValue;
            MemberInfo mi = mcc.Member;
            Console.WriteLine("Member: {0}", mi.Name);
            Console.WriteLine("current value: {0}", currVal);
            Console.WriteLine("original value: {0}", origVal);
            Console.WriteLine("database value: {0}", databaseVal);
            Console.ReadLine();
        }
    }
}

See also

How to: Resolve Conflicts by Retaining Database Values

To reconcile differences between expected and actual database values before you try to resubmit your changes, you can use OverwriteCurrentValues to retain the values found in the database. The current values in the object model are then overwritten. For more information, see Optimistic Concurrency: Overview.

Note

In all cases, the record on the client is first refreshed by retrieving the updated data from the database. This action makes sure that the next update try will not fail on the same concurrency checks.

Example

In this scenario, a ChangeConflictException exception is thrown when User1 tries to submit changes, because User2 has in the meantime changed the Assistant and Department columns. The following table shows the situation.

                                                            Manager    Assistant    Department
Original database state when queried by User1 and User2.   Alfreds    Maria        Sales
User1 prepares to submit these changes.                    Alfred                  Marketing
User2 has already submitted these changes.                            Mary         Service

User1 decides to resolve this conflict by having the newer database values overwrite the current values in the object model.

When User1 resolves the conflict by using OverwriteCurrentValues, the result in the database is as shown in the following table:

                                      Manager              Assistant           Department
New state after conflict resolution.  Alfreds (original)   Mary (from User2)   Service (from User2)

The following example code shows how to overwrite current values in the object model with the database values. (No inspection or custom handling of individual member conflicts occurs.)

C#
Northwnd db = new Northwnd("...");
try
{
    db.SubmitChanges(ConflictMode.ContinueOnConflict);
}

catch (ChangeConflictException e)
{
    Console.WriteLine(e.Message);
    foreach (ObjectChangeConflict occ in db.ChangeConflicts)
    {
        // All database values overwrite current values.
        occ.Resolve(RefreshMode.OverwriteCurrentValues);
    }
}

See also

How to: Resolve Conflicts by Overwriting Database Values

To reconcile differences between expected and actual database values before you try to resubmit your changes, you can use KeepCurrentValues to overwrite database values. For more information, see Optimistic Concurrency: Overview.

Note

In all cases, the record on the client is first refreshed by retrieving the updated data from the database. This action makes sure that the next update try will not fail on the same concurrency checks.

Example

In this scenario, a ChangeConflictException exception is thrown when User1 tries to submit changes, because User2 has in the meantime changed the Assistant and Department columns. The following table shows the situation.

                                                            Manager    Assistant    Department
Original database state when queried by User1 and User2.   Alfreds    Maria        Sales
User1 prepares to submit these changes.                    Alfred                  Marketing
User2 has already submitted these changes.                            Mary         Service

User1 decides to resolve this conflict by overwriting database values with the current client member values.

When User1 resolves the conflict by using KeepCurrentValues, the result in the database is as shown in the following table:

                                      Manager               Assistant          Department
New state after conflict resolution.  Alfred (from User1)   Maria (original)   Marketing (from User1)

The following example code shows how to overwrite database values with the current client member values. (No inspection or custom handling of individual member conflicts occurs.)

C#
try
{
    db.SubmitChanges(ConflictMode.ContinueOnConflict);
}

catch (ChangeConflictException e)
{
    Console.WriteLine(e.Message);
    foreach (ObjectChangeConflict occ in db.ChangeConflicts)
    {
        //No database values are merged into current.
        occ.Resolve(RefreshMode.KeepCurrentValues);
    }
}

See also

How to: Resolve Conflicts by Merging with Database Values

To reconcile differences between expected and actual database values before you try to resubmit your changes, you can use KeepChanges to merge database values with the current client member values. For more information, see Optimistic Concurrency: Overview.

Note

In all cases, the record on the client is first refreshed by retrieving the updated data from the database. This action makes sure that the next update try will not fail on the same concurrency checks.

Example

In this scenario, a ChangeConflictException exception is thrown when User1 tries to submit changes, because User2 has in the meantime changed the Assistant and Department columns. The following table shows the situation.

                                                            Manager    Assistant    Department
Original database state when queried by User1 and User2.   Alfreds    Maria        Sales
User1 prepares to submit these changes.                    Alfred                  Marketing
User2 has already submitted these changes.                            Mary         Service

User1 decides to resolve this conflict by merging database values with the current client member values. The result will be that database values are overwritten only when the current changeset has also modified that value.

When User1 resolves the conflict by using KeepChanges, the result in the database is as shown in the following table:

                                      Manager               Assistant           Department
New state after conflict resolution.  Alfred (from User1)   Mary (from User2)   Marketing (from User1)

The following example shows how to merge database values with the current client member values (unless the client has also changed that value). No inspection or custom handling of individual member conflicts occurs.

C#
try
{
    db.SubmitChanges(ConflictMode.ContinueOnConflict);
}

catch (ChangeConflictException e)
{
    Console.WriteLine(e.Message);
    // Automerge database values for members that client
    // has not modified.
    foreach (ObjectChangeConflict occ in db.ChangeConflicts)
    {
        occ.Resolve(RefreshMode.KeepChanges);
    }
}

// Submit succeeds on second try.
db.SubmitChanges(ConflictMode.FailOnFirstConflict);

See also

Debugging Support

LINQ to SQL provides general debugging support for LINQ to SQL projects. Also see Debugging LINQ.

LINQ to SQL also provides special tools for viewing SQL code. For more information, see the topics in this section.

In This Section

How to: Display Generated SQL
Describes how to use DataContext properties to view query activity.

How to: Display a ChangeSet
Describes how to show changes being sent to the database.

How to: Display LINQ to SQL Commands
Describes how to display SQL commands and other information.

Troubleshooting
Presents common scenarios whose causes might be hard to determine.

See also

How to: Display Generated SQL

You can view the SQL code generated for queries and change processing by using the Log property. This approach can be useful for understanding LINQ to SQL functionality and for debugging specific problems.

Example

The following example uses the Log property to display SQL code in the console window before the code is executed. You can use this property with query, insert, update, and delete commands.

The lines from the console window are what you see when you execute the Visual Basic or C# code that follows.

SELECT [t0].[CustomerID], [t0].[CompanyName], [t0].[ContactName], [t0].[ContactT  
itle], [t0].[Address], [t0].[City], [t0].[Region], [t0].[PostalCode], [t0].[Coun  
try], [t0].[Phone], [t0].[Fax]  
FROM [dbo].[Customers] AS [t0]  
WHERE [t0].[City] = @p0  
-- @p0: Input String (Size = 6; Prec = 0; Scale = 0) [London]  
-- Context: SqlProvider(Sql2005) Model: AttributedMetaModel Build: 3.5.20810.0  
AROUT  
BSBEV  
CONSH  
EASTC  
NORTS  
SEVES  
C#
db.Log = Console.Out;
IQueryable<Customer> custQuery =
    from cust in db.Customers
    where cust.City == "London"
    select cust;

foreach(Customer custObj in custQuery)
{
    Console.WriteLine(custObj.CustomerID);
}

See also

How to: Display a ChangeSet

You can view changes tracked by a DataContext by using GetChangeSet.

Example

The following example retrieves customers whose city is London, changes the city to Paris, and submits the changes back to the database.

C#
Northwnd db = new Northwnd(@"c:\northwnd.mdf");

var custQuery =
    from cust in db.Customers
    where cust.City == "London"
    select cust;
  
foreach (Customer custObj in custQuery)
{
    Console.WriteLine("CustomerID: {0}", custObj.CustomerID);
    Console.WriteLine("\tOriginal value: {0}", custObj.City);
    custObj.City = "Paris";
    Console.WriteLine("\tUpdated value: {0}", custObj.City);
}
           
ChangeSet cs = db.GetChangeSet();
Console.Write("Total changes: {0}", cs);
// Freeze the console window.
Console.ReadLine();

db.SubmitChanges();

Output from this code appears similar to the following. Note that the summary at the end shows that eight changes were made.

console
CustomerID: AROUT
  Original value: London
  Updated value: Paris
CustomerID: BSBEV
  Original value: London
  Updated value: Paris
CustomerID: CONSH
  Original value: London
  Updated value: Paris
CustomerID: EASTC
  Original value: London
  Updated value: Paris
CustomerID: NORTS
  Original value: London
  Updated value: Paris
CustomerID: PARIS
  Original value: London
  Updated value: Paris
CustomerID: SEVES
  Original value: London
  Updated value: Paris
CustomerID: SPECD
  Original value: London
  Updated value: Paris
Total changes: {Added: 0, Removed: 0, Modified: 8}

See also

How to: Display LINQ to SQL Commands

Use GetCommand to display SQL commands and other information.

Example

In the following example, the console window displays the output from the query, followed by the SQL commands that are generated, the type of commands, and the type of connection.

C#
// using System.Data.Common;
Northwnd db = new Northwnd(@"c:\northwnd.mdf");

var q =
    from cust in db.Customers
    where cust.City == "London"
    select cust;

Console.WriteLine("Customers from London:");
foreach (var z in q)
{
    Console.WriteLine("\t {0}",z.ContactName);
}

DbCommand dc = db.GetCommand(q);
Console.WriteLine("\nCommand Text: \n{0}",dc.CommandText);
Console.WriteLine("\nCommand Type: {0}",dc.CommandType);
Console.WriteLine("\nConnection: {0}",dc.Connection);

Console.ReadLine();

Output appears as follows:

Customers from London:  
    Thomas Hardy  
    Victoria Ashworth  
    Elizabeth Brown  
    Ann Devon  
    Simon Crowther  
    Marie Bertrand  
    Hari Kumar  
    Dominique Perrier  
Command Text:  
SELECT [t0].[CustomerID], [t0].[CompanyName], [t0].[ContactName], [t0].[ContactT  
itle], [t0].[Address], [t0].[City], [t0].[Region], [t0].[PostalCode], [t0].[Coun  
try], [t0].[Phone], [t0].[Fax]  
FROM [dbo].[Customers] AS [t0]  
WHERE [t0].[City] = @p0  
  
Command Type: Text  
  
Connection: System.Data.SqlClient.SqlConnection  

See also

Troubleshooting

The following information describes some issues that you might encounter in your LINQ to SQL applications, and provides suggestions for avoiding them or reducing their effect.

Additional issues are addressed in Frequently Asked Questions.

Unsupported Standard Query Operators

LINQ to SQL does not support all standard query operator methods (for example, ElementAt). As a result, projects that compile can still produce run-time errors. For more information, see Standard Query Operator Translation.

Memory Issues

If a query involves both an in-memory collection and a LINQ to SQL Table<TEntity>, the query might be executed in memory, depending on the order in which the two sources are specified. If the query must be executed in memory, all the rows of the database table must first be retrieved.

This approach is inefficient and could result in significant memory and processor usage. Try to avoid such multi-domain queries.
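
The following sketch illustrates the difference, assuming a local list of customer IDs (the ID values are illustrative). The first query runs as LINQ to Objects and enumerates the entire Customers table into memory; the second uses Contains, which LINQ to SQL translates into a SQL IN clause so the filtering happens on the server.

C#
// using System.Collections.Generic;
List<string> ids = new List<string> { "ALFKI", "AROUT", "BSBEV" };

// Executed in memory: the whole Customers table is retrieved first.
var localJoin =
    from id in ids
    join cust in db.Customers on id equals cust.CustomerID
    select cust;

// Translated to SQL: only the matching rows are retrieved.
var serverFilter =
    from cust in db.Customers
    where ids.Contains(cust.CustomerID)
    select cust;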

File Names and SQLMetal

To specify an input file name, add the name to the command line as the input file. Including the file name in the connection string (using the /conn option) is not supported. For more information, see SqlMetal.exe (Code Generation Tool).
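
For example, a command line of the following form (file names and option values are illustrative) passes the .mdf file as the input file rather than inside the /conn option:

console
sqlmetal /code:Northwind.cs /namespace:Northwind "c:\northwnd.mdf"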

Class Library Projects

The Object Relational Designer creates a connection string in the app.config file of the project. In class library projects, the app.config file is not used. LINQ to SQL uses the connection string provided in the design-time files. Changing the value in app.config does not change the database to which your application connects.

Cascade Delete

LINQ to SQL does not support or recognize cascade-delete operations. If you want to delete a row in a table that has constraints against it, you must do either of the following:

  • Set the ON DELETE CASCADE rule in the foreign-key constraint in the database.

  • Use your own code to first delete the child objects that prevent the parent object from being deleted.

Otherwise, a SqlException exception is thrown.

For more information, see How to: Delete Rows From the Database.

Expression Not Queryable

If you get the "Expression [expression] is not queryable; are you missing an assembly reference?" error, make sure of the following:

  • Your application is targeting .NET Compact Framework 3.5.

  • You have a reference to System.Core.dll and System.Data.Linq.dll.

  • You have an Imports (Visual Basic) or using (C#) directive for System.Linq and System.Data.Linq.

DuplicateKeyException

In the course of debugging a LINQ to SQL project, you might traverse an entity's relations. Doing so brings these items into the cache, and LINQ to SQL becomes aware of their presence. If you then try to execute Attach or InsertOnSubmit or a similar method that produces multiple rows that have the same key, a DuplicateKeyException is thrown.
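
The following sketch shows the pattern (the order ID is illustrative): the first query places the entity in the identity cache, so attaching a second instance that has the same key throws.

C#
Northwnd db = new Northwnd(@"c:\northwnd.mdf");

// Loading the order (or reaching it through a relation while debugging)
// places it in the DataContext's identity cache.
Order tracked = db.Orders.First(o => o.OrderID == 10250);

Order copy = new Order { OrderID = 10250 };
try
{
    db.Orders.Attach(copy);
}
catch (DuplicateKeyException e)
{
    // An entity with the same key is already being tracked.
    Console.WriteLine(e.Message);
}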

String Concatenation Exceptions

Concatenation on operands mapped to [n]text and other [n][var]char is not supported. An exception is thrown for concatenation of strings mapped to the two different sets of types. For more information, see System.String Methods.

Skip and Take Exceptions in SQL Server 2000

You must use identity members (IsPrimaryKey) when you use Take or Skip against a SQL Server 2000 database. The query must be against a single table (that is, not a join), or be a Distinct, Except, Intersect, or Union operation, and must not include a Concat operation. For more information, see the "SQL Server 2000 Support" section in Standard Query Operator Translation.

This requirement does not apply to SQL Server 2005.
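
As a minimal sketch, the following paged query satisfies those requirements: it targets the single Orders table and orders by the identity primary key (OrderID), so it can be translated for SQL Server 2000 as well.

C#
var page =
    (from ord in db.Orders
     orderby ord.OrderID
     select ord)
    .Skip(20)
    .Take(10);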

GroupBy InvalidOperationException

This exception is thrown when a column value is null in a GroupBy query that groups by a boolean expression, such as group x by (Phone==@phone). Because the expression is a boolean, the key is inferred to be boolean, not nullable boolean. When the translated comparison produces a null, an attempt is made to assign a nullable boolean to a boolean, and the exception is thrown.

To avoid this situation (assuming you want to treat nulls as false), use an approach such as the following:

GroupBy="(Phone != null) && (Phone=@Phone)"

OnCreated() Partial Method

The generated method OnCreated() is called each time the object constructor is called, including the scenario in which LINQ to SQL calls the constructor to make a copy for original values. Take this behavior into account if you implement the OnCreated() method in your own partial class.
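
For example, a partial class along the following lines (the entity name matches the generated Northwind classes) keeps OnCreated() free of side effects that should run only once per logical object.

C#
public partial class Customer
{
    // Called for every construction, including the copies LINQ to SQL
    // creates to hold original values for change tracking.
    partial void OnCreated()
    {
        // Keep this limited to per-instance initialization.
    }
}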

See also

Background Information

The topics in this section pertain to concepts and procedures that extend beyond the basics about using LINQ to SQL.

Additional examples of LINQ to SQL code and applications appear throughout the topics in this section.

In This Section

ADO.NET and LINQ to SQL
Describes the relationship of ADO.NET and LINQ to SQL.

Analyzing LINQ to SQL Source Code
Describes how to analyze LINQ to SQL mapping by generating and viewing source code from the Northwind sample database.

Customizing Insert, Update, and Delete Operations
Describes how to add validation code and other customizations.

Data Binding
Describes how LINQ to SQL uses IListSource to support data binding.

Inheritance Support
Describes the role of inheritance in the LINQ to SQL object model, and how to use related operators in your queries.

Local Method Calls
Describes LINQ to SQL support for local method calls.

N-Tier and Remote Applications with LINQ to SQL
Provides detailed information for multi-tier applications that use LINQ to SQL.

Object Identity
Describes object identity in the LINQ to SQL object model, and explains how this feature differs from object identity in a database.

The LINQ to SQL Object Model
Describes the object model and its relationship to the relational data model.

Object States and Change-Tracking
Provides detailed information about how LINQ to SQL tracks changes.

Optimistic Concurrency: Overview
Describes optimistic concurrency and defines terms.

Query Concepts
Describes aspects of queries in LINQ to SQL that differ from queries in LINQ.

Retrieving Objects from the Identity Cache
Describes the types of queries that retrieve objects from the identity cache.

Security in LINQ to SQL
Describes the correct approach to security in database connections.

Serialization
Describes the serialization process in LINQ to SQL applications.

Stored Procedures
Describes how to map stored procedures at design time and how to call them from your application.

Transaction Support
Outlines the three models of transaction that LINQ to SQL supports.

SQL-CLR Type Mismatches
Describes the challenges of mingling different type systems.

SQL-CLR Custom Type Mappings
Provides guidance on customizing type mappings.

User-Defined Functions
Describes how to map user-defined functions at design time and how to call them from your application.

Related Sections

Programming Guide
Includes links to sections that explain various aspects of LINQ to SQL.

ADO.NET and LINQ to SQL

LINQ to SQL is part of the ADO.NET family of technologies. It is based on services provided by the ADO.NET provider model. You can therefore mix LINQ to SQL code with existing ADO.NET applications and migrate current ADO.NET solutions to LINQ to SQL. The following illustration provides a high-level view of the relationship.

LINQ to SQL and ADO.NET

Connections

You can supply an existing ADO.NET connection when you create a LINQ to SQL DataContext. All operations against the DataContext (including queries) use this provided connection. If the connection is already open, LINQ to SQL leaves it as is when you are finished with it.

C#
string connString = @"Data Source=.\SQLEXPRESS;AttachDbFilename=c:\northwind.mdf;
    Integrated Security=True; Connect Timeout=30; User Instance=True";
SqlConnection nwindConn = new SqlConnection(connString);
nwindConn.Open();

Northwnd interop_db = new Northwnd(nwindConn);

SqlTransaction nwindTxn = nwindConn.BeginTransaction();

try
{
    SqlCommand cmd = new SqlCommand(
        "UPDATE Products SET QuantityPerUnit = 'single item' WHERE ProductID = 3");
    cmd.Connection = nwindConn;
    cmd.Transaction = nwindTxn;
    cmd.ExecuteNonQuery();

    interop_db.Transaction = nwindTxn;

    Product prod1 = interop_db.Products
        .First(p => p.ProductID == 4);
    Product prod2 = interop_db.Products
        .First(p => p.ProductID == 5);
    prod1.UnitsInStock -= 3;
    prod2.UnitsInStock -= 5;

    interop_db.SubmitChanges();

    nwindTxn.Commit();
}
catch (Exception e)
{
    Console.WriteLine(e.Message);
    Console.WriteLine("Error submitting changes... all changes rolled back.");
}

nwindConn.Close();

You can always access the connection and close it yourself by using the Connection property, as in the following code:

C#
db.Connection.Close(); 

Transactions

You can supply your DataContext with your own database transaction when your application has already initiated the transaction and you want your DataContext to be involved.

The preferred method of doing transactions with the .NET Framework is to use the TransactionScope object. By using this approach, you can make distributed transactions that work across databases and other memory-resident resource managers. Transaction scopes require few resources to start. They promote themselves to distributed transactions only when there are multiple connections within the scope of the transaction.

C#
using (TransactionScope ts = new TransactionScope())
{
    db.SubmitChanges();
    ts.Complete();
}

You cannot use this approach for all databases. For example, the SqlClient connection cannot promote system transactions when it works against a SQL Server 2000 server. Instead, it automatically enlists in a full, distributed transaction whenever it sees a transaction scope being used.

Direct SQL Commands

At times you can encounter situations where the ability of the DataContext to query or submit changes is insufficient for the specialized task you want to perform. In these circumstances you can use the ExecuteQuery method to issue SQL commands to the database and convert the query results to objects.

For example, assume that the data for the Customer class is spread over two tables (customer1 and customer2). The following query returns a sequence of Customer objects:

C#
IEnumerable<Customer> results = db.ExecuteQuery<Customer>(
    @"select c1.custid as CustomerID, c2.custName as ContactName
    from customer1 as c1, customer2 as c2
    where c1.custid = c2.custid");

As long as the column names in the tabular results match column properties of your entity class, LINQ to SQL creates your objects out of any SQL query.

Parameters

The ExecuteQuery method accepts parameters. The following code executes a parameterized query:

C#
IEnumerable<Customer> results = db.ExecuteQuery<Customer>(
    "select contactname from customers where city = {0}",
    "London");

Note

Parameters are expressed in the query text by using the same curly notation used by Console.WriteLine() and String.Format(). String.Format() takes the query string you provide and substitutes the curly-braced parameters with generated parameter names such as @p0, @p1 …, @p(n).

See also

Analyzing LINQ to SQL Source Code

By using the following steps, you can produce LINQ to SQL source code from the Northwind sample database. You can compare elements of the object model with elements of the database to better see how different items are mapped.

Note

Developers using Visual Studio can use the O/R Designer to produce this code.

  1. If you do not already have the Northwind sample database on your development computer, you can download it free of charge. For more information, see Downloading Sample Databases.

  2. Use the SqlMetal command-line tool to generate a Visual Basic or C# source file. For more information, see SqlMetal.exe (Code Generation Tool). By typing the following commands at a command prompt, you can generate Visual Basic and C# source files that include stored procedures and functions:

    • sqlmetal /code:northwind.vb /language:vb "c:\northwnd.mdf" /sprocs /functions /pluralize

    • sqlmetal /code:northwind.cs /language:csharp "c:\northwnd.mdf" /sprocs /functions /pluralize

See also

Customizing Insert, Update, and Delete Operations

By default, LINQ to SQL generates dynamic SQL to implement insert, read, update, and delete operations. In practice, however, you typically customize your application to suit your business needs.

Note

If you are using Visual Studio, you can use the Object Relational Designer to customize insert, update, and delete actions.

This section of topics describes the techniques that LINQ to SQL provides for customizing insert, read, update, and delete operations in your application.

In This Section

Customizing Operations: Overview
Describes the various techniques LINQ to SQL provides for customizing insert, read, update, and delete operations.

Insert, Update, and Delete Operations
Describes the LINQ to SQL default processes for manipulating database data.

Responsibilities of the Developer In Overriding Default Behavior
Describes the role of the developer in implementing requirements not enforced by LINQ to SQL.

Adding Business Logic By Using Partial Methods
Describes how to use partial methods to override autogenerated methods.

Customizing Operations: Overview

By default, LINQ to SQL generates dynamic SQL for insert, update, and delete operations based on mapping. However, in practice you typically want to add your own business logic to provide for security, validation, and so forth.

LINQ to SQL techniques for customizing these operations include the following.

Loading Options

In your queries, you can control how much data related to your main target is retrieved when you connect to the database. This functionality is implemented largely by using DataLoadOptions. For more information, see Deferred versus Immediate Loading.
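
For example, the following sketch (assuming the Northwind Customer and Order classes) retrieves each customer's orders at the same time as the customer:

C#
Northwnd db = new Northwnd(@"c:\northwnd.mdf");

DataLoadOptions options = new DataLoadOptions();
// Retrieve the Orders collection whenever a Customer is retrieved.
options.LoadWith<Customer>(c => c.Orders);
db.LoadOptions = options;

var londonCustomers = db.Customers.Where(c => c.City == "London");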

Partial Methods

In its default mapping, LINQ to SQL provides partial methods to help you implement your business logic. For more information, see Adding Business Logic By Using Partial Methods.

Stored Procedures and User-Defined Functions

LINQ to SQL supports the use of stored procedures and user-defined functions. Stored procedures are frequently used to customize operations. For more information, see Stored Procedures.

See also

Insert, Update, and Delete Operations

You perform Insert, Update, and Delete operations in LINQ to SQL by adding, changing, and removing objects in your object model. By default, LINQ to SQL translates your actions to SQL and submits the changes to the database.

LINQ to SQL offers maximum flexibility in manipulating and persisting changes that you made to your objects. As soon as entity objects are available (either by retrieving them through a query or by constructing them anew), you can change them as typical objects in your application. That is, you can change their values, you can add them to your collections, and you can remove them from your collections. LINQ to SQL tracks your changes and is ready to transmit them back to the database when you call SubmitChanges.

Note

LINQ to SQL does not support or recognize cascade-delete operations. If you want to delete a row in a table that has constraints against it, you must either set the ON DELETE CASCADE rule in the foreign-key constraint in the database, or use your own code to first delete the child objects that prevent the parent object from being deleted. Otherwise, an exception is thrown. For more information, see How to: Delete Rows From the Database.

The following excerpts use the Customer and Order classes from the Northwind sample database. Class definitions are not shown for brevity.

C#
Northwnd db = new Northwnd(@"c:\Northwnd.mdf");

// Query for a specific customer.
var cust =
    (from c in db.Customers
     where c.CustomerID == "ALFKI"
     select c).First();

// Change the name of the contact.
cust.ContactName = "New Contact";

// Create and add a new Order to the Orders collection.
Order ord = new Order { OrderDate = DateTime.Now };
cust.Orders.Add(ord);

// Delete an existing Order.
Order ord0 = cust.Orders[0];

// Removing it from the table also removes it from the Customer’s list.
db.Orders.DeleteOnSubmit(ord0);

// Ask the DataContext to save all the changes.
db.SubmitChanges();

When you call SubmitChanges, LINQ to SQL automatically generates and executes the SQL commands required to transmit your changes back to the database.

Note

You can override this behavior by using your own custom logic, typically by way of a stored procedure. For more information, see Responsibilities of the Developer In Overriding Default Behavior.

Developers using Visual Studio can use the Object Relational Designer to develop stored procedures for this purpose.

See also

Responsibilities of the Developer In Overriding Default Behavior

LINQ to SQL does not enforce the following requirements, but behavior is undefined if these requirements are not satisfied.

  • The overriding method must not call SubmitChanges or Attach. LINQ to SQL throws an exception if these methods are called in an override method.

  • Override methods cannot be used to start, commit, or stop a transaction. The SubmitChanges operation is performed under a transaction. An inner nested transaction can interfere with the outer transaction. Load override methods can start a transaction only after they determine that the operation is not being performed in a Transaction.

  • Override methods are expected to follow the applicable optimistic concurrency mapping. The override method is expected to throw a ChangeConflictException when an optimistic concurrency conflict occurs. LINQ to SQL catches this exception so that the conflict-handling (ConflictMode) option provided on SubmitChanges can be processed correctly.

  • Create (Insert) and Update override methods are expected to flow back the values for database-generated columns to corresponding object members when the operation is successfully completed.

    For example, if Order.OrderID is mapped to an identity column (autoincrement primary key), then the InsertOrder() override method must retrieve the database-generated ID and set the Order.OrderID member to that ID. Likewise, timestamp members must be updated to the database-generated timestamp values to make sure that the updated objects are consistent. Failure to propagate the database-generated values can cause an inconsistency between the database and the objects tracked by the DataContext.

  • It is the user's responsibility to invoke the correct dynamic API. For example, in the update override method, only the ExecuteDynamicUpdate can be called. LINQ to SQL does not detect or verify whether the invoked dynamic method matches the applicable operation. If an inapplicable method is called (for example, ExecuteDynamicDelete for an object to be updated), the results are undefined.

  • Finally, the overriding method is expected to perform the stated operation. The semantics of LINQ to SQL operations (such as eager loading, deferred loading, and SubmitChanges) require the overrides to provide the stated service. For example, a load override that just returns an empty collection without checking the contents in the database will likely lead to inconsistent data.

See also

Adding Business Logic By Using Partial Methods

You can customize Visual Basic and C# generated code in your LINQ to SQL projects by using partial methods. The code that LINQ to SQL generates defines signatures as one part of a partial method. If you want to implement the method, you can add your own partial method. If you do not add your own implementation, the compiler discards the partial method signature (together with all calls to it), and LINQ to SQL uses its default behavior.

Note

If you are using Visual Studio, you can use the Object Relational Designer to add validation and other customizations to entity classes.

For example, the default mapping for the Customer class in the Northwind sample database includes the following partial method:

C#
partial void OnAddressChanged();

You can implement your own method by adding code such as the following to your own partial Customer class:

C#
public partial class Customer
{
    partial void OnAddressChanged()
    {
        // Insert business logic here.
    }
}

This approach is typically used in LINQ to SQL to override default methods for Insert, Update, Delete, and to validate properties during object life-cycle events.

For more information, see Partial Methods (Visual Basic) or partial (Method) (C# Reference).

Example

Description

The following example shows ExampleClass first as it might be defined by a code-generating tool such as SQLMetal, and then how you might implement only one of the two methods.

Code

C#
// Code-generating tool defines a partial class, including
// two partial methods.
partial class ExampleClass
{
    partial void onFindingMaxOutput();
    partial void onFindingMinOutput();
}

// Developer implements one of the partial methods. Compiler
// discards the signature of the other method.
partial class ExampleClass
{
    partial void onFindingMaxOutput()
    {
        Console.WriteLine("Maximum has been found.");
    }
}

Example

Description

The following example uses the relationship between Shipper and Order entities. Among the methods shown, note the partial methods InsertShipper and DeleteShipper. These methods override the default partial methods supplied by LINQ to SQL mapping.

Code

C#
public static int LoadOrdersCalled = 0;
private IEnumerable<Order> LoadOrders(Shipper shipper)
{
    LoadOrdersCalled++;
    return this.Orders.Where(o => o.ShipVia == shipper.ShipperID);
}

public static int LoadShipperCalled = 0;
private Shipper LoadShipper(Order order)
{
    LoadShipperCalled++;
    return this.Shippers.Single(s => s.ShipperID == order.ShipVia);
}

public static int InsertShipperCalled = 0;
partial void InsertShipper(Shipper shipper)
{
    InsertShipperCalled++;
    // Call a Web service to perform an insert operation.
    InsertShipperService(shipper);
}

public static int UpdateShipperCalled = 0;
private void UpdateShipper(Shipper original, Shipper current)
{
    Shipper shipper = new Shipper();
    UpdateShipperCalled++;
    // Call a Web service to update shipper.
    InsertShipperService(shipper);
}

public static bool DeleteShipperCalled;
partial void DeleteShipper(Shipper shipper)
{
    DeleteShipperCalled = true;
}

See also

Data Binding

LINQ to SQL supports binding to common controls, such as grid controls. Specifically, LINQ to SQL defines the basic patterns for binding to a data grid and handling master-detail binding, both with regard to display and updating.

Underlying Principle

LINQ to SQL translates LINQ queries to SQL for execution on a database. The results are strongly typed IEnumerable<T> sequences. Because these objects are ordinary common language runtime (CLR) objects, ordinary object data binding can be used to display the results. On the other hand, change operations (inserts, updates, and deletes) require additional steps.

Operation

Implicit binding to Windows Forms controls is accomplished by implementing IListSource. The data sources generic Table<TEntity> (Table<T> in C# or Table(Of T) in Visual Basic) and generic DataQuery have been updated to implement IListSource. User interface (UI) data-binding engines (Windows Forms and Windows Presentation Foundation) both test whether their data source implements IListSource. Therefore, assigning a query directly to the data source of a control implicitly triggers LINQ to SQL collection generation, as in the following example:

C#
DataGrid dataGrid1 = new DataGrid();
DataGrid dataGrid2 = new DataGrid();
DataGrid dataGrid3 = new DataGrid();

var custQuery =
    from cust in db.Customers
    select cust;
dataGrid1.DataSource = custQuery;
dataGrid2.DataSource = custQuery;
dataGrid2.DataMember = "Orders";

BindingSource bs = new BindingSource();
bs.DataSource = custQuery;
dataGrid3.DataSource = bs;

The same occurs with Windows Presentation Foundation:

C#
ListView listView1 = new ListView();
var custQuery2 =
    from cust in db.Customers
    select cust;

// Assigning the query to ItemsSource implicitly generates the bound collection.
listView1.ItemsSource = custQuery2;

Collection generations are implemented by generic Table<TEntity> and generic DataQuery in GetList.

IListSource Implementation

LINQ to SQL implements IListSource in two locations:

  • The data source is a Table<TEntity>: LINQ to SQL browses the table to fill a DataBindingList collection that keeps a reference on the table.

  • The data source is an IQueryable<T>. There are two scenarios:

    • If LINQ to SQL finds the underlying Table<TEntity> from the IQueryable<T>, the source allows editing, and the situation is the same as in the first bullet point.

    • If LINQ to SQL cannot find the underlying Table<TEntity>, the source does not allow editing (for example, a groupby query). LINQ to SQL browses the query to fill a generic SortableBindingList, which is a simple BindingList<T> that implements the sorting feature for T entities for a given property.

Specialized Collections

To support the features described earlier in this topic, BindingList<T> has been specialized into two classes: generic SortableBindingList and generic DataBindingList. Both are declared as internal.

Generic SortableBindingList

This class inherits from BindingList<T>, and is a sortable version of BindingList<T>. Sorting is an in-memory solution and never contacts the database itself. BindingList<T> implements IBindingList but does not support sorting by default. However, BindingList<T> implements IBindingList with virtual core methods. You can easily override these methods. Generic SortableBindingList overrides SupportsSortingCore, SortPropertyCore, SortDirectionCore, and ApplySortCore. ApplySortCore is called by ApplySort and sorts the list of T items for a given property.

An exception is raised if the property does not belong to T.

To achieve sorting, LINQ to SQL creates a generic SortableBindingList.PropertyComparer class that implements the generic IComparer<T> interface and provides a default comparer for a given type T, a PropertyDescriptor, and a direction. This class dynamically creates a Comparer of T where T is the PropertyType of the PropertyDescriptor. Then, the default comparer is retrieved from the static generic Comparer. A default instance is obtained by using reflection.

Generic SortableBindingList is also the base class for DataBindingList. Generic SortableBindingList offers two virtual methods for suspending or resuming item add/remove tracking. Those two methods can be used for base features such as sorting, but they are actually implemented by derived classes such as generic DataBindingList.
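
The following sketch is not the internal LINQ to SQL class; it only illustrates the override pattern described above (SupportsSortingCore, SortPropertyCore, SortDirectionCore, and ApplySortCore), assuming property values that implement IComparable:

C#
using System.Collections.Generic;
using System.ComponentModel;
using System.Linq;

// Illustrative only: a BindingList<T> that sorts in memory by
// overriding the virtual *Core members.
public class SimpleSortableBindingList<T> : BindingList<T>
{
    private bool isSorted;
    private PropertyDescriptor sortProperty;
    private ListSortDirection sortDirection;

    protected override bool SupportsSortingCore
    {
        get { return true; }
    }

    protected override bool IsSortedCore
    {
        get { return isSorted; }
    }

    protected override PropertyDescriptor SortPropertyCore
    {
        get { return sortProperty; }
    }

    protected override ListSortDirection SortDirectionCore
    {
        get { return sortDirection; }
    }

    protected override void ApplySortCore(PropertyDescriptor prop, ListSortDirection direction)
    {
        // Order the in-memory items by the requested property.
        List<T> sorted = this.Items.OrderBy(item => prop.GetValue(item)).ToList();
        if (direction == ListSortDirection.Descending)
        {
            sorted.Reverse();
        }

        for (int i = 0; i < sorted.Count; i++)
        {
            this.Items[i] = sorted[i];
        }

        sortProperty = prop;
        sortDirection = direction;
        isSorted = true;
        OnListChanged(new ListChangedEventArgs(ListChangedType.Reset, -1));
    }

    protected override void RemoveSortCore()
    {
        isSorted = false;
    }
}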

Generic DataBindingList

This class inherits from generic SortableBindingList. Generic DataBindingList keeps a reference on the underlying generic Table of the generic IQueryable used for the initial filling of the collection. Generic DataBindingList adds tracking for item add/remove to the collection by overriding InsertItem() and RemoveItem(). It also implements the abstract suspend/resume tracking feature to make tracking conditional. This feature makes generic DataBindingList take advantage of all the polymorphic usage of the tracking feature of the parent classes.

Binding to EntitySets

Binding to EntitySet is a special case because EntitySet is already a collection that implements IBindingList. LINQ to SQL adds sorting and canceling (ICancelAddNew) support. An EntitySet class uses an internal list to store entities. This list is a low-level collection based on a generic array, the generic ItemList class.

Adding a Sorting Feature

Arrays offer a sort method (Array.Sort()) that can be used with a Comparer of T. LINQ to SQL uses the generic SortableBindingList.PropertyComparer class described earlier in this topic to obtain this Comparer for the property and the direction to be sorted on. An ApplySort method is added to generic ItemList to call this feature.

On the EntitySet side, you now have to declare sorting support:

When you use a System.Windows.Forms.BindingSource and bind an EntitySet<TEntity> to the System.Windows.Forms.BindingSource.DataSource, you must call EntitySet<TEntity>.GetNewBindingList to update BindingSource.List.

If you use a System.Windows.Forms.BindingSource, set the BindingSource.DataMember property, and set BindingSource.DataSource to a class that has a property (named by BindingSource.DataMember) that exposes the EntitySet<TEntity>, you do not have to call EntitySet<TEntity>.GetNewBindingList to update BindingSource.List, but you lose the sorting capability.
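
For the first scenario, a minimal sketch (assuming a Northwind Customer entity and a dataGridView1 control on the form) looks like the following:

C#
Customer cust = db.Customers.First(c => c.CustomerID == "ALFKI");

BindingSource bindingSource1 = new BindingSource();
// GetNewBindingList returns a fresh IBindingList over the EntitySet,
// which keeps BindingSource.List current and enables sorting.
bindingSource1.DataSource = cust.Orders.GetNewBindingList();
dataGridView1.DataSource = bindingSource1;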

Caching

LINQ to SQL queries implement GetList. When the Windows Forms BindingSource class meets this interface, it calls GetList() three times for a single connection. To work around this situation, LINQ to SQL implements a cache per instance to store and always return the same generated collection.

Cancellation

IBindingList defines an AddNew method that is used by controls to create a new item from a bound collection. The DataGridView control shows this feature very well when the last visible row contains a star in its header. The star shows you that you can add a new item.

In addition to this feature, a collection can also implement ICancelAddNew. This interface lets controls either cancel a newly added item or confirm that it has been committed.

ICancelAddNew is implemented in all LINQ to SQL databound collections (generic SortableBindingList and generic EntitySet). In both implementations the code performs as follows:

  • Lets items be inserted and then removed from the collection.

  • Does not track changes until the UI commits the new item.

  • Does not track changes if the addition is canceled (CancelNew).

  • Allows tracking when the addition is committed (EndNew).

  • Lets the collection behave normally if the new item does not come from AddNew.

Troubleshooting

This section calls out several items that might help troubleshoot your LINQ to SQL data binding applications.

  • You must use properties; using only fields is not sufficient. Windows Forms require this usage.

  • By default, image, varbinary, and timestamp database types map to byte array. Because ToString() is not supported in this scenario, these objects cannot be displayed.

  • A class member mapped to a primary key has a setter, but LINQ to SQL does not support object identity change. Therefore, the primary/unique key that is used in mapping cannot be updated in the database. A change in the grid causes an exception when you call SubmitChanges.

  • If an entity is bound in two separate grids (for example, one master and another detail), a Delete in the master grid is not propagated to the detail grid.

See also

Inheritance Support

LINQ to SQL supports single-table mapping. In other words, a complete inheritance hierarchy is stored in a single database table. The table contains the flattened union of all possible data columns for the whole hierarchy. (A union is the result of combining two tables into one table that has the rows that were present in either of the original tables.) Each row has nulls in the columns that do not apply to the type of the instance represented by the row.

The single-table mapping strategy is the simplest representation of inheritance and provides good performance characteristics for many different categories of queries.

To implement this mapping in LINQ to SQL, you must specify the attributes and attribute properties on the root class of the inheritance hierarchy. For more information, see How to: Map Inheritance Hierarchies.

Developers using Visual Studio can also use the Object Relational Designer to map inheritance hierarchies.
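
The root-class attributes look something like the following sketch (a hypothetical Contact hierarchy; the discriminator column and codes are illustrative assumptions):

C#
[Table(Name = "Contacts")]
[InheritanceMapping(Code = "CON", Type = typeof(Contact), IsDefault = true)]
[InheritanceMapping(Code = "EMP", Type = typeof(EmployeeContact))]
public class Contact
{
    [Column(IsPrimaryKey = true)]
    public int ContactID { get; set; }

    // The discriminator column tells LINQ to SQL which derived type
    // each row in the single table represents.
    [Column(IsDiscriminator = true)]
    public string ContactType { get; set; }
}

public class EmployeeContact : Contact
{
    [Column(CanBeNull = true)]
    public string Title { get; set; }
}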

See also

Local Method Calls

A local method call is one that is executed within the object model. A remote method call is one that LINQ to SQL translates to SQL and transmits to the database engine for execution. Local method calls are needed when LINQ to SQL cannot translate the call into SQL. Otherwise, an InvalidOperationException is thrown.

Example 1

In the following example, an Order class is mapped to the Orders table in the Northwind sample database. A local instance method has been added to the class.

In Query 1, the constructor for the Order class is executed locally. In Query 2, if LINQ to SQL tried to translate LocalInstanceMethod() into SQL, the attempt would fail and an InvalidOperationException would be thrown. But because LINQ to SQL provides support for local method calls, Query 2 does not throw an exception.

C#
// Query 1.
var q1 =
    from ord in db.Orders
    where ord.EmployeeID == 9
    select ord;

foreach (var ordObj in q1)
{
    Console.WriteLine("{0}, {1}", ordObj.OrderID,
        ordObj.ShipVia.Value);
}
C#
// Query 2.
public int LocalInstanceMethod(int x)
{
    return x + 1;
}

void q2()
{
    var q2 =
    from ord in db.Orders
    where ord.EmployeeID == 9
    select new
    {
        member0 = ord.OrderID,
        member1 = ord.LocalInstanceMethod(ord.ShipVia.Value)
    };
}

See also

N-Tier and Remote Applications with LINQ to SQL

You can create n-tier or multitier applications that use LINQ to SQL. Typically, the LINQ to SQL data context, entity classes, and query construction logic are located on the middle tier as the data access layer (DAL). Business logic and any non-persistent data can be implemented completely in partial classes and methods of entities and the data context, or it can be implemented in separate classes.

The client or presentation layer calls methods on the middle-tier's remote interface, and the DAL on that tier will execute queries or stored procedures that are mapped to DataContext methods. The middle tier returns the data to clients typically as XML representations of entities or proxy objects.

On the middle tier, entities are created by the data context, which tracks their state, and manages deferred loading from and submission of changes to the database. These entities are "attached" to the DataContext. However, after the entities are sent to another tier through serialization, they become detached, which means the DataContext is no longer tracking their state. Entities that the client sends back for updates must be reattached to the data context before LINQ to SQL can submit the changes to the database. The client is responsible for providing original values and/or timestamps back to the middle tier if those are required for optimistic concurrency checks.

In ASP.NET applications, the LinqDataSource manages most of this complexity. For more information, see LinqDataSource Web Server Control Overview.

Additional Resources

For more information about how to implement n-tier applications that use LINQ to SQL, see the topics that follow in this section.

For more information about n-tier applications that use ADO.NET DataSets, see Work with datasets in n-tier applications.

See also

LINQ to SQL N-Tier with ASP.NET

In ASP.NET applications that use LINQ to SQL, you use the LinqDataSource Web server control. The control handles most of the logic required to query against LINQ to SQL, pass the data to the browser, retrieve it, and submit it to the LINQ to SQL DataContext, which then updates the database. You just configure the control in the markup, and the control handles all the data transfer between LINQ to SQL and the browser. Because the control handles the interactions with the presentation tier, and LINQ to SQL handles the communication with the data tier, your main focus in ASP.NET multitier applications is on writing your custom business logic.

For more information about LINQDataSource, see LinqDataSource Web Server Control Overview.

See also

LINQ to SQL N-Tier with Web Services

LINQ to SQL is designed especially for use on the middle tier in a loosely-coupled data access layer (DAL) such as a Web service. If the presentation tier is an ASP.NET Web page, then you use the LinqDataSource Web server control to manage the data transfer between the user interface and LINQ to SQL on the middle-tier. If the presentation tier is not an ASP.NET page, then both the middle-tier and the presentation tier must do some additional work to manage the serialization and deserialization of data.

Setting up LINQ to SQL on the Middle Tier

In a Web service or n-tier application, the middle tier contains the data context and the entity classes. You can create these classes manually, or by using either SQLMetal.exe or the Object Relational Designer as described elsewhere in the documentation. At design time, you have the option to make the entity classes serializable. For more information, see How to: Make Entities Serializable. Another option is to create a separate set of classes that encapsulate the data to be serialized, and then project into those serializable types when you return data in your LINQ queries.
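
For example, a middle-tier query might project into a simple serializable type such as the following sketch (CustomerDto is a hypothetical class, not part of the generated model; the data context name and connectionString are likewise assumptions for illustration):

C#
[DataContract]
public class CustomerDto
{
    [DataMember]
    public string CustomerID { get; set; }

    [DataMember]
    public string ContactName { get; set; }
}

// Project query results into the serializable type before returning them.
public IEnumerable<CustomerDto> GetCustomersByCity(string city)
{
    NorthwindClasses1DataContext db =
        new NorthwindClasses1DataContext(connectionString);

    return (from c in db.Customers
            where c.City == city
            select new CustomerDto
            {
                CustomerID = c.CustomerID,
                ContactName = c.ContactName
            }).ToList();
}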

You then define the interface with the methods that the clients will call to retrieve, insert, and update data. The interface methods wrap your LINQ queries. You can use any kind of serialization mechanism to handle the remote method calls and the serialization of data. The only requirement is that if you have cyclic or bi-directional relationships in your object model, such as that between Customers and Orders in the standard Northwind object model, then you must use a serializer that supports them. The Windows Communication Foundation (WCF) DataContractSerializer supports bi-directional relationships, but the XmlSerializer that is used with non-WCF Web services does not. If you choose to use XmlSerializer, you must make sure that your object model has no cyclic relationships.

For more information about Windows Communication Foundation, see Windows Communication Foundation Services and WCF Data Services in Visual Studio.

Implement your business rules or other domain-specific logic by using the partial classes and methods on the DataContext and entity classes to hook into LINQ to SQL runtime events. For more information, see Implementing N-Tier Business Logic.

Defining the Serializable Types

The client or presentation tier must have type definitions for the classes that it will be receiving from the middle tier. Those types may be the entity classes themselves, or special classes that wrap only certain fields from the entity classes for remoting. In any case, LINQ to SQL is completely unconcerned about how the presentation tier acquires those type definitions. For example, the presentation tier could use WCF to generate the types automatically, or it could have a copy of a DLL in which those types are defined, or it could just define its own versions of the types.

Retrieving and Inserting Data

The middle tier defines an interface that specifies how the presentation tier accesses the data, for example, GetProductByID(int productID) or GetCustomers(). On the middle tier, the method body typically creates a new instance of the DataContext and executes a query against one or more of its tables. The middle tier then returns the result as an IEnumerable<T>, where T is either an entity class or another type that is used for serialization. The presentation tier never sends or receives query variables directly to or from the middle tier. The two tiers exchange values, objects, and collections of concrete data. After it has received a collection, the presentation tier can use LINQ to Objects to query it if necessary.

When inserting data, the presentation tier can construct a new object and send it to the middle tier, or it can have the middle tier construct the object based on values that it provides. In general, retrieving and inserting data in n-tier applications does not differ much from the process in 2-tier applications. For more information, see Querying the Database and Making and Submitting Data Changes.

Tracking Changes for Updates and Deletes

LINQ to SQL supports optimistic concurrency based on timestamps (also named RowVersions) and on original values. If the database tables have timestamps, then updates and deletions require little extra work on either the middle-tier or presentation tier. However, if you must use original values for optimistic concurrency checks, then the presentation tier is responsible for tracking those values and sending them back when it makes updates. This is because changes that were made to entities on the presentation tier are not tracked on the middle tier. In fact, the original retrieval of an entity, and the eventual update made to it are typically performed by two completely separate instances of the DataContext.

The greater the number of changes that the presentation tier makes, the more complex it becomes to track those changes and package them back to the middle tier. The implementation of a mechanism for communicating changes is completely up to the application. The only requirement is that LINQ to SQL must be given those original values that are required for optimistic concurrency checks.

For more information, see Data Retrieval and CUD Operations in N-Tier Applications (LINQ to SQL).

See also

Implementing Business Logic (LINQ to SQL)

The term "business logic" in this topic refers to any custom rules or validation tests that you apply to data before it is inserted, updated or deleted from the database. Business logic is also sometimes referred to as "business rules" or "domain logic." In n-tier applications it is typically designed as a logical layer so that it can be modified independently of the presentation layer or data access layer. The business logic can be invoked by the data access layer before or after any update, insertion, or deletion of data in the database.

The business logic can be as simple as a schema validation to make sure that the type of the field is compatible with the type of the table column. Or it can consist of a set of objects that interact in arbitrarily complex ways. The rules may be implemented as stored procedures on the database or as in-memory objects. However the business logic is implemented, LINQ to SQL enables you to use partial classes and partial methods to separate the business logic from the data access code.

How LINQ to SQL Invokes Your Business Logic

When you generate an entity class at design time, either manually or by using the Object Relational Designer or SQLMetal, it is defined as a partial class. This means that, in a separate code file, you can define another part of the entity class that contains your custom business logic. At compile time, the two parts are merged into a single class. But if you have to regenerate your entity classes by using the Object Relational Designer or SQLMetal, you can do so and your part of the class will not be modified.

The partial classes that define entities and the DataContext contain partial methods. These are the extensibility points that you can use to apply your business logic before and after any update, insert, or delete for an entity or entity property. Partial methods can be thought of as compile-time events. The code-generator defines a method signature and calls the methods in the get and set property accessors, the DataContext constructor, and in some cases behind the scenes when SubmitChanges is called. However, if you do not implement a particular partial method, then all the references to it and the definition are removed at compile time.

In the implementing definition that you write in your separate code file, you can perform whatever custom logic is required. You can use your partial class itself as your domain layer, or you can call from your implementing definition of the partial method into a separate object or objects. Either way, your business logic is cleanly separated from both your data access code and your presentation layer code.

A Closer Look at the Extensibility Points

The following example shows part of the code generated by the Object Relational Designer for the DataContext class that has two tables: Customers and Orders. Note that Insert, Update, and Delete methods are defined for each table in the class.

C#
public partial class MyNorthWindDataContext : System.Data.Linq.DataContext  
    {  
        private static System.Data.Linq.Mapping.MappingSource mappingSource = new AttributeMappingSource();  
  
        #region Extensibility Method Definitions  
        partial void OnCreated();  
        partial void InsertCustomer(Customer instance);  
        partial void UpdateCustomer(Customer instance);  
        partial void DeleteCustomer(Customer instance);  
        partial void InsertOrder(Order instance);  
        partial void UpdateOrder(Order instance);  
        partial void DeleteOrder(Order instance);  
        #endregion  

If you implement the Insert, Update and Delete methods in your partial class, the LINQ to SQL runtime will call them instead of its own default methods when SubmitChanges is called. This enables you to override the default behavior for create / read / update / delete operations. For more information, see Walkthrough: Customizing the insert, update, and delete behavior of entity classes.
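
For example, an implementing definition in your own partial DataContext class might look like the following sketch; here the override simply falls back to dynamic SQL by calling ExecuteDynamicUpdate, but it could call a stored procedure instead:

C#
public partial class MyNorthWindDataContext
{
    partial void UpdateCustomer(Customer instance)
    {
        // Custom logic (logging, auditing, a stored procedure call,
        // and so on) can run here. This sketch falls back to the
        // dynamic SQL that LINQ to SQL would otherwise generate.
        this.ExecuteDynamicUpdate(instance);
    }
}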

The OnCreated method is called in the class constructor.

C#
public MyNorthWindDataContext(string connection) :  
            base(connection, mappingSource)  
        {  
            OnCreated();  
        }  

The entity classes have three methods that are called by the LINQ to SQL runtime when the entity is created, loaded, and validated (when SubmitChanges is called). The entity classes also have two partial methods for each property, one that is called before the property is set, and one that is called after. The following code example shows some of the methods generated for the Customer class:

C#
#region Extensibility Method Definitions  
    partial void OnLoaded();  
    partial void OnValidate();  
    partial void OnCreated();  
    partial void OnCustomerIDChanging(string value);  
    partial void OnCustomerIDChanged();  
    partial void OnCompanyNameChanging(string value);  
    partial void OnCompanyNameChanged();  
// ...additional Changing/Changed methods for each property  

The methods are called in the property set accessor as shown in the following example for the CustomerID property:

C#
public string CustomerID  
{  
    set  
    {  
        if ((this._CustomerID != value))  
        {  
            this.OnCustomerIDChanging(value);  
            this.SendPropertyChanging();  
            this._CustomerID = value;  
            this.SendPropertyChanged("CustomerID");  
            this.OnCustomerIDChanged();  
        }  
     }  
}  

In your part of the class, you write an implementing definition of the method. In Visual Studio, after you type partial you will see IntelliSense for the method definitions in the other part of the class.

C#
partial class Customer   
    {  
        partial void OnCustomerIDChanging(string value)  
        {  
            //Perform custom validation logic here.  
        }  
    }  

For more information about how to add business logic to your application by using partial methods, see the following topics:

How to: Add validation to entity classes

Walkthrough: Customizing the insert, update, and delete behavior of entity classes

Walkthrough: Adding Validation to Entity Classes

See also

Data Retrieval and CUD Operations in N-Tier Applications (LINQ to SQL)

When you serialize entity objects such as Customers or Orders to a client over a network, those entities are detached from their data context. The data context no longer tracks their changes or their associations with other objects. This is not an issue as long as the clients are only reading the data. It is also relatively simple to enable clients to add new rows to a database. However, if your application requires that clients be able to update or delete data, then you must attach the entities to a new data context before you call DataContext.SubmitChanges. In addition, if you are using an optimistic concurrency check with original values, then you will also need a way to provide the database both the original entity and the entity as modified. The Attach methods are provided to enable you to put entities into a new data context after they have been detached.

Even if you are serializing proxy objects in place of the LINQ to SQL entities, you still have to construct an entity on the data access layer (DAL), and attach it to a new System.Data.Linq.DataContext, in order to submit the data to the database.

LINQ to SQL is completely indifferent about how entities are serialized. For more information about how to use the Object Relational Designer and SQLMetal tools to generate classes that are serializable by using Windows Communication Foundation (WCF), see How to: Make Entities Serializable.

Note

Only call the Attach methods on new or deserialized entities. The only way for an entity to be detached from its original data context is for it to be serialized. If you try to attach an undetached entity to a new data context, and that entity still has deferred loaders from its previous data context, LINQ to SQL will throw an exception. An entity with deferred loaders from two different data contexts could cause unwanted results when you perform insert, update, and delete operations on that entity. For more information about deferred loaders, see Deferred versus Immediate Loading.

Retrieving Data

Client Method Call

The following example shows a sample method call to the DAL from a Windows Forms client. In this example, the DAL is implemented as a WCF service library:

C#
private void GetProdsByCat_Click(object sender, EventArgs e)  
{  
    // Create the WCF client proxy.  
    NorthwindServiceReference.Service1Client proxy =   
    new NorthwindClient.NorthwindServiceReference.Service1Client();  
  
    // Call the method on the service.  
    NorthwindServiceReference.Product[] products =   
    proxy.GetProductsByCategory(1);  
  
    // If the database uses original values for concurrency checks,   
    // the client needs to store them and pass them back to the   
    // middle tier along with the new values when updating data.  
    foreach (var v in products)  
    {  
        // Persist to a list<Product> declared at class scope.  
        // Additional change-tracking logic is the responsibility  
        // of the presentation tier and/or middle tier.  
        originalProducts.Add(v);  
    }  
  
    // (Not shown) Bind the products list to a control  
    // and/or perform whatever processing is necessary.  
    }  

Middle Tier Implementation

The following example shows an implementation of the interface method on the middle tier. The following are the two main points to note:

  • The DataContext is declared at method scope.

  • The method returns an IEnumerable collection of the actual results. The serializer will execute the query to send the results back to the client/presentation tier. To access the query results locally on the middle tier, you can force execution by calling ToList or ToArray on the query variable. You can then return that list or array as an IEnumerable.

C#
public IEnumerable<Product> GetProductsByCategory(int categoryID)  
{  
    NorthwindClasses1DataContext db =   
    new NorthwindClasses1DataContext(connectionString);  
  
    IEnumerable<Product> productQuery =  
    from prod in db.Products  
    where prod.CategoryID == categoryID  
    select prod;  
  
    return productQuery.AsEnumerable();   
}  

An instance of a data context should have a lifetime of one "unit of work." In a loosely-coupled environment, a unit of work is typically small, perhaps one optimistic transaction, including a single call to SubmitChanges. Therefore, the data context is created and disposed at method scope. If the unit of work includes calls to business rules logic, then generally you will want to keep the DataContext instance for that whole operation. In any case, DataContext instances are not intended to be kept alive for long periods of time across arbitrary numbers of transactions.

This method will return Product objects but not the collection of Order_Detail objects that are associated with each Product. Use the DataLoadOptions object to change this default behavior. For more information, see How to: Control How Much Related Data Is Retrieved.

Inserting Data

To insert a new object, the presentation tier just calls the relevant method on the middle tier interface, and passes in the new object to insert. In some cases, it may be more efficient for the client to pass in only some values and have the middle tier construct the full object.

Middle Tier Implementation

On the middle tier, a new DataContext is created, the object is attached to the DataContext by using the InsertOnSubmit method, and the object is inserted when SubmitChanges is called. Exceptions, callbacks, and error conditions can be handled just as in any other Web service scenario.

C#
// No call to Attach is necessary for inserts.  
    public void InsertOrder(Order o)  
    {  
        NorthwindClasses1DataContext db = new NorthwindClasses1DataContext(connectionString);  
        db.Orders.InsertOnSubmit(o);  
  
        // Exception handling not shown.  
        db.SubmitChanges();  
    }  

Deleting Data

To delete an existing object from the database, the presentation tier calls the relevant method on the middle tier interface, and passes in its copy that includes original values of the object to be deleted.

Delete operations involve optimistic concurrency checks, and the object to be deleted must first be attached to the new data context. In this example, the Boolean parameter is set to false to indicate that the object does not have a timestamp (RowVersion). If your database table does generate timestamps for each record, then concurrency checks are much simpler, especially for the client. Just pass in either the original or modified object and set the Boolean parameter to true. In any case, on the middle tier it is typically necessary to catch the ChangeConflictException. For more information about how to handle optimistic concurrency conflicts, see Optimistic Concurrency: Overview.

When you delete entities that have foreign key constraints on associated tables, you must first delete all the objects in their EntitySet<TEntity> collections.

C#
// Attach is necessary for deletes.  
public void DeleteOrder(Order order)  
{  
    NorthwindClasses1DataContext db = new NorthwindClasses1DataContext(connectionString);  
  
    db.Orders.Attach(order, false);  
    // This will throw an exception if the order has order details.  
    db.Orders.DeleteOnSubmit(order);  
    try  
    {  
        // ConflictMode is an optional parameter.  
        db.SubmitChanges(ConflictMode.ContinueOnConflict);  
    }  
    catch (ChangeConflictException e)  
    {  
       // Get conflict information, and take actions  
       // that are appropriate for your application.  
       // See MSDN Article How to: Manage Change Conflicts (LINQ to SQL).  
    }  
}  

Updating Data

LINQ to SQL supports updates in these scenarios involving optimistic concurrency:

  • Optimistic concurrency based on timestamps or RowVersion numbers.

  • Optimistic concurrency based on original values of a subset of entity properties.

  • Optimistic concurrency based on the complete original and modified entities.

You can also perform updates or deletes on an entity together with its relations, for example, a Customer and a collection of its associated Order objects. When you make modifications on the client to a graph of entity objects and their child (EntitySet) collections, and the optimistic concurrency checks require original values, the client must provide those original values for each entity and EntitySet<TEntity> object. If you want to enable clients to make a set of related updates, deletes, and insertions in a single method call, you must provide the client a way to indicate what type of operation to perform on each entity. On the middle tier, you then must call the appropriate Attach method for each entity (followed by DeleteOnSubmit or DeleteAllOnSubmit for deletions), or InsertOnSubmit without Attach for insertions, before you call SubmitChanges. Do not retrieve data from the database as a way to obtain original values before you try updates.

For more information about optimistic concurrency, see Optimistic Concurrency: Overview. For detailed information about resolving optimistic concurrency change conflicts, see How to: Manage Change Conflicts.

The following examples demonstrate each scenario:

Optimistic concurrency with timestamps

C#
// Assume that "customer" has been sent by client.  
// Attach with "true" to say this is a modified entity  
// and it can be checked for optimistic concurrency because  
//  it has a column that is marked with "RowVersion" attribute  
db.Customers.Attach(customer, true);
try  
{  
    // Optional: Specify a ConflictMode value  
    // in call to SubmitChanges.  
    db.SubmitChanges();  
}  
catch(ChangeConflictException e)  
{  
    // Handle conflict based on options provided  
    // See MSDN article How to: Manage Change Conflicts (LINQ to SQL).  
}  

With Subset of Original Values

In this approach, the client returns the complete serialized object, together with the values to be modified.

C#
public void UpdateProductInventory(Product p, short? unitsInStock, short? unitsOnOrder)  
{  
    using (NorthwindClasses1DataContext db = new NorthwindClasses1DataContext(connectionString))  
    {  
        // p is the original unmodified product  
        // that was obtained from the database.  
        // The client kept a copy and returns it now.  
        db.Products.Attach(p, false);  
  
        // Now that the original values are in the data context, apply the changes.  
        p.UnitsInStock = unitsInStock;  
        p.UnitsOnOrder = unitsOnOrder;  
        try  
        {  
             // Optional: Specify a ConflictMode value  
             // in call to SubmitChanges.  
             db.SubmitChanges();  
        }  
        catch (ChangeConflictException e)  
        {  
            // Handle conflict based on provided options.  
            // See MSDN article How to: Manage Change Conflicts  
            // (LINQ to SQL).  
        }  
    }  
}  

With Complete Entities

C#
public void UpdateProductInfo(Product newProd, Product originalProd)  
{  
     using (NorthwindClasses1DataContext db = new  
        NorthwindClasses1DataContext(connectionString))  
     {  
         db.Products.Attach(newProd, originalProd);  
         try  
         {  
               // Optional: Specify a ConflictMode value  
               // in call to SubmitChanges.  
               db.SubmitChanges();  
         }  
        catch (ChangeConflictException e)  
        {  
            // Handle potential change conflict in whatever way  
            // is appropriate for your application.  
            // For more information, see the MSDN article  
            // How to: Manage Change Conflicts (LINQ to SQL)/  
        }   
    }  
}  

To update a collection, call AttachAll instead of Attach.

Expected Entity Members

As stated previously, only certain members of the entity object are required to be set before you call the Attach methods. Entity members that are required to be set must fulfill the following criteria:

  • Be part of the entity’s identity.

  • Be expected to be modified.

  • Be a timestamp or have its UpdateCheck attribute set to something besides Never.

If a table uses a timestamp or version number for an optimistic concurrency check, you must set those members before you call Attach. A member is dedicated for optimistic concurrency checking when the IsVersion property is set to true on that Column attribute. Any requested updates will be submitted only if the version number or timestamp values are the same on the database.

A member is also used in the optimistic concurrency check as long as the member does not have UpdateCheck set to Never. The default value is Always if no other value is specified.

If any one of these required members is missing, a ChangeConflictException is thrown during SubmitChanges ("Row not found or changed").
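
For example, mapping attributes like the following sketch mark a timestamp column for version checking and exclude a member from the concurrency check (the property and column names are illustrative):

C#
[Column(IsVersion = true, IsDbGenerated = true, AutoSync = AutoSync.Always)]
public System.Data.Linq.Binary RowVersion { get; set; }

[Column(UpdateCheck = UpdateCheck.Never)]
public string Notes { get; set; }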

State

After an entity object is attached to the DataContext instance, the object is considered to be in the PossiblyModified state. There are three ways to force an attached object to be considered Modified.

  1. Attach it as unmodified, and then directly modify the fields.

  2. Attach it with the Attach overload that takes current and original object instances. This supplies the change tracker with old and new values so that it will automatically know which fields have changed.

  3. Attach it with the Attach overload that takes a second Boolean parameter (set to true). This will tell the change tracker to consider the object modified without having to supply any original values. In this approach, the object must have a version/timestamp field.

For more information, see Object States and Change-Tracking.

If an entity object already occurs in the ID Cache with the same identity as the object being attached, a DuplicateKeyException is thrown.

When you attach with an IEnumerable set of objects, a DuplicateKeyException is thrown when an already existing key is present. Remaining objects are not attached.

See also

Object Identity

Objects in the runtime have unique identities. Two variables that refer to the same object actually refer to the same instance of the object. Because of this fact, changes that you make by way of a path through one variable are immediately visible through the other.

Rows in a relational database table do not have unique identities. Because each row has a unique primary key, no two rows share the same key value. However, this fact constrains only the contents of the database table.

In reality, data is most often brought out of the database and into a different tier, where an application works with it. This is the model that LINQ to SQL supports. When data is brought out of the database as rows, you have no expectation that two rows that represent the same data actually correspond to the same row instances. If you query for a specific customer two times, you get two rows of data. Each row contains the same information.

With objects you expect something very different. You expect that if you ask the DataContext for the same information repeatedly, it will in fact give you the same object instance. You expect this behavior because objects have special meaning for your application and you expect them to behave like objects. You designed them as hierarchies or graphs. You expect to retrieve them as such and not to receive multitudes of replicated instances just because you asked for the same thing more than one time.

In LINQ to SQL, the DataContext manages object identity. Whenever you retrieve a new row from the database, the row is logged in an identity table by its primary key, and a new object is created. Whenever you retrieve that same row, the original object instance is handed back to the application. In this manner the DataContext translates the concept of identity as seen by the database (that is, primary keys) into the concept of identity seen by the language (that is, instances). The application only sees the object in the state that it was first retrieved. The new data, if different, is discarded. For more information, see Retrieving Objects from the Identity Cache.

LINQ to SQL uses this approach to manage the integrity of local objects in order to support optimistic updates. Because the only changes that occur after the object is at first created are those made by the application, the intent of the application is clear. If changes by an outside party have occurred in the interim, they are identified at the time SubmitChanges() is called.

Note

If the object requested by the query is easily identifiable as one already retrieved, no query is executed. The identity table acts as a cache of all previously retrieved objects.

Examples

Object Caching Example 1

In this example, if you execute the same query two times, you receive a reference to the same object in memory every time.

C#
Customer cust1 =
    (from cust in db.Customers
     where cust.CustomerID == "BONAP"
     select cust).First();

Customer cust2 =
    (from cust in db.Customers
     where cust.CustomerID == "BONAP"
     select cust).First();

Object Caching Example 2

In this example, if you execute different queries that return the same row from the database, you receive a reference to the same object in memory every time.

C#
Customer cust1 =
    (from cust in db.Customers
     where cust.CustomerID == "BONAP"
     select cust).First();

Customer cust2 =
    (from ord in db.Orders
     where ord.Customer.CustomerID == "BONAP"
     select ord).First().Customer;

See also

The LINQ to SQL Object Model

In LINQ to SQL, an object model expressed in the programming language of the developer is mapped to the data model of a relational database. Operations on the data are then conducted according to the object model.

In this scenario, you do not issue database commands (for example, INSERT) to the database. Instead, you change values and execute methods within your object model. When you want to query the database or send it changes, LINQ to SQL translates your requests into the correct SQL commands and sends those commands to the database.

(Figure: the LINQ to SQL object model and its mapping to the relational data model.)

The most fundamental elements in the LINQ to SQL object model and their relationship to elements in the relational data model are summarized in the following table:

LINQ to SQL Object Model   Relational Data Model
Entity class               Table
Class member               Column
Association                Foreign-key relationship
Method                     Stored Procedure or Function

Note

The following descriptions assume that you have a basic knowledge of the relational data model and rules.

LINQ to SQL Entity Classes and Database Tables

In LINQ to SQL, a database table is represented by an entity class. An entity class is like any other class you might create except that you annotate the class by using special information that associates the class with a database table. You make this annotation by adding a custom attribute (TableAttribute) to your class declaration, as in the following example:

Example

C#
[Table(Name = "Customers")]
public class Customer
{
    public string CustomerID;
    // ...
    public string City;
}

Only instances of classes declared as tables (that is, entity classes) can be saved to the database.

For more information, see the Table Attribute section of Attribute-Based Mapping.

LINQ to SQL Class Members and Database Columns

In addition to associating classes with tables, you designate fields or properties to represent database columns. For this purpose, LINQ to SQL defines the ColumnAttribute attribute, as in the following example:

Example

C#
[Table(Name = "Customers")]
public class Customer
{
    [Column(IsPrimaryKey = true)]
    public string CustomerID;
    [Column]
    public string City;
}

Only fields and properties mapped to columns are persisted to or retrieved from the database. Those not declared as columns are considered as transient parts of your application logic.

The ColumnAttribute attribute has a variety of properties that you can use to customize these members that represent columns (for example, designating a member as representing a primary key column). For more information, see the Column Attribute section of Attribute-Based Mapping.

LINQ to SQL Associations and Database Foreign-key Relationships

In LINQ to SQL, you represent database associations (such as foreign-key to primary-key relationships) by applying the AssociationAttribute attribute. In the following segment of code, the Order class contains a Customer property that has an AssociationAttribute attribute. This property and its attribute provide the Order class with a relationship to the Customer class.

The following code example shows the Customer property from the Order class.

Example

C#
[Association(Name="FK_Orders_Customers", Storage="_Customer", ThisKey="CustomerID", IsForeignKey=true)]
public Customer Customer
{
    get
    {
        return this._Customer.Entity;
    }
    set
    {
        Customer previousValue = this._Customer.Entity;
        if (((previousValue != value) 
                    || (this._Customer.HasLoadedOrAssignedValue == false)))
        {
            this.SendPropertyChanging();
            if ((previousValue != null))
            {
                this._Customer.Entity = null;
                previousValue.Orders.Remove(this);
            }
            this._Customer.Entity = value;
            if ((value != null))
            {
                value.Orders.Add(this);
                this._CustomerID = value.CustomerID;
            }
            else
            {
                this._CustomerID = default(string);
            }
            this.SendPropertyChanged("Customer");
        }
    }
}

For more information, see the Association Attribute section of Attribute-Based Mapping.

LINQ to SQL Methods and Database Stored Procedures

LINQ to SQL supports stored procedures and user-defined functions. In LINQ to SQL, you map these database-defined abstractions to client objects so that you can access them in a strongly typed manner from client code. The method signatures resemble as closely as possible the signatures of the procedures and functions defined in the database. You can use IntelliSense to discover these methods.

A result set that is returned by a call to a mapped procedure is a strongly typed collection.

LINQ to SQL maps stored procedures and functions to methods by using the FunctionAttribute and ParameterAttribute attributes. Methods representing stored procedures are distinguished from those representing user-defined functions by the IsComposable property. If this property is set to false (the default), the method represents a stored procedure. If it is set to true, the method represents a database function.

Note

If you are using Visual Studio, you can use the Object Relational Designer to create methods mapped to stored procedures and user-defined functions.

Example

C#
// This is an example of a stored procedure in the Northwind
// sample database. The IsComposable property defaults to false.
[Function(Name="dbo.CustOrderHist")]
public ISingleResult<CustOrderHistResult> CustOrderHist([Parameter(Name="CustomerID", DbType="NChar(5)")] string customerID)
{
    IExecuteResult result = this.ExecuteMethodCall(this, ((MethodInfo)(MethodInfo.GetCurrentMethod())), customerID);
    return ((ISingleResult<CustOrderHistResult>)(result.ReturnValue));
}

For more information, see the Function Attribute, Stored Procedure Attribute, and Parameter Attribute sections of Attribute-Based Mapping and Stored Procedures.

See also

Object States and Change-Tracking

LINQ to SQL objects always participate in some state. For example, when LINQ to SQL creates a new object, the object is in Unchanged state. A new object that you yourself create is unknown to the DataContext and is in Untracked state. Following successful execution of SubmitChanges, all objects known to LINQ to SQL are in Unchanged state. (The single exception is represented by those that have been successfully deleted from the database, which are in Deleted state and unusable in that DataContext instance.)

Object States

The following table lists the possible states for LINQ to SQL objects.

State Description
Untracked An object not tracked by LINQ to SQL. Examples include the following:

- An object not queried through the current DataContext (such as a newly created object).
- An object created through deserialization.
- An object queried through a different DataContext.
Unchanged An object retrieved by using the current DataContext and not known to have been modified since it was created.
PossiblyModified An object which is attached to a DataContext. For more information, see Data Retrieval and CUD Operations in N-Tier Applications (LINQ to SQL).
ToBeInserted An object not retrieved by using the current DataContext. This causes a database INSERT during SubmitChanges.
ToBeUpdated An object known to have been modified since it was retrieved. This causes a database UPDATE during SubmitChanges.
ToBeDeleted An object marked for deletion, causing a database DELETE during SubmitChanges.
Deleted An object that has been deleted in the database. This state is final and does not allow for additional transitions.

Inserting Objects

You can explicitly request Inserts by using InsertOnSubmit. Alternatively, LINQ to SQL can infer Inserts by finding objects connected to one of the known objects that must be updated. For example, if you add an Untracked object to an EntitySet<TEntity> or set an EntityRef<TEntity> to an Untracked object, you make the Untracked object reachable by way of tracked objects in the graph. While processing SubmitChanges, LINQ to SQL traverses the tracked objects and discovers any reachable persistent objects that are not tracked. Such objects are candidates for insertion into the database.
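
For example, a minimal explicit insert might look like the following sketch (the CustomerID and City values are illustrative; the Customer mapping follows the Northwind examples used elsewhere in this document):

C#
Northwnd db = new Northwnd(@"c:\northwnd.mdf");

// The new object is Untracked until it is marked for insertion.
Customer newCust = new Customer();
newCust.CustomerID = "NEWCO";
newCust.City = "London";

db.Customers.InsertOnSubmit(newCust);   // Now ToBeInserted.

// The INSERT is sent to the database here.
db.SubmitChanges();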

For classes in an inheritance hierarchy, InsertOnSubmit(o) also sets the value of the member designated as the discriminator to match the type of the object o. In the case of a type matching the default discriminator value, this action causes the discriminator value to be overwritten with the default value. For more information, see Inheritance Support.

Important

An object added to a Table is not in the identity cache. The identity cache reflects only what is retrieved from the database. After a call to InsertOnSubmit, the added entity does not appear in queries against the database until SubmitChanges is successfully completed.

Deleting Objects

You mark a tracked object o for deletion by calling DeleteOnSubmit(o) on the appropriate Table<TEntity>. LINQ to SQL considers the removal of an object from an EntitySet<TEntity> as an update operation, and the corresponding foreign key value is set to null. The target of the operation (o) is not deleted from its table. For example, cust.Orders.Remove(ord) indicates an update where the relationship between cust and ord is severed by setting the foreign key ord.CustomerID to null. It does not cause the deletion of the row corresponding to ord.
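
The contrast between the two operations might look like the following sketch (it assumes the Northwind Customer/Order mapping shown earlier in this document):

C#
Northwnd db = new Northwnd(@"c:\northwnd.mdf");
Customer cust = db.Customers.First(c => c.CustomerID == "ALFKI");
Order ord = cust.Orders.First();

// Severs the relationship only: the foreign key ord.CustomerID is set to
// null by an UPDATE. The Orders row itself is not deleted.
cust.Orders.Remove(ord);

// By contrast, the following would mark the row itself for a DELETE
// during SubmitChanges:
// db.Orders.DeleteOnSubmit(ord);

db.SubmitChanges();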

LINQ to SQL performs the following processing when an object is deleted (DeleteOnSubmit) from its table:

  • When SubmitChanges is called, a DELETE operation is performed for that object.

  • The removal is not propagated to related objects regardless of whether they are loaded. Specifically, related objects are not loaded for updating the relationship property.

  • After successful execution of SubmitChanges, the objects are set to the Deleted state. As a result, you cannot use the object or its id in that DataContext. The internal cache maintained by a DataContext instance does not eliminate objects that are retrieved or added as new, even after the objects have been deleted in the database.

You can call DeleteOnSubmit only on an object tracked by the DataContext. For an Untracked object, you must call Attach before you call DeleteOnSubmit. Calling DeleteOnSubmit on an Untracked object throws an exception.

Note

Removing an object from a table tells LINQ to SQL to generate a corresponding SQL DELETE command at the time of SubmitChanges. This action does not remove the object from the cache or propagate the deletion to related objects.

To reclaim the id of a deleted object, use a new DataContext instance. For cleanup of related objects, you can use the cascade delete feature of the database, or else manually delete the related objects.

The related objects do not have to be deleted in any special order (unlike in the database).

Updating Objects

LINQ to SQL detects updates by observing change notifications. Notifications are provided through the PropertyChanging event in property setters. When LINQ to SQL is notified of the first change to an object, it creates a copy of the object and considers the object a candidate for generating an Update statement.

For objects that do not implement INotifyPropertyChanging, LINQ to SQL maintains a copy of the values that objects had when they were first materialized. When you call SubmitChanges, LINQ to SQL compares the current and original values to decide whether the object has been changed.
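
A typical update therefore looks like the following sketch (it assumes the Northwind mapping used elsewhere in this document; in designer-generated classes the property setter raises the PropertyChanging notification):

C#
Northwnd db = new Northwnd(@"c:\northwnd.mdf");
Customer cust = db.Customers.First(c => c.CustomerID == "ALFKI");

// The change tracker records the original value and marks the object
// as ToBeUpdated.
cust.City = "London";

// The UPDATE statement is generated and executed here.
db.SubmitChanges();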

For updates to relationships, the reference from the child to the parent (that is, the reference corresponding to the foreign key) is considered the authority. The reference in the reverse direction (that is, from parent to child) is optional. Relationship classes (EntitySet<TEntity> and EntityRef<TEntity>) guarantee that the bidirectional references are consistent for one-to-many and one-to-one relationships. If the object model does not use EntitySet<TEntity> or EntityRef<TEntity>, and if the reverse reference is present, it is your responsibility to keep it consistent with the forward reference when the relationship is updated.

If you update both the required reference and the corresponding foreign key, you must make sure that they agree. An InvalidOperationException exception is thrown if the two are not synchronized at the time that you call SubmitChanges. Although foreign key value changes are sufficient for affecting an update of the underlying row, you should change the reference to maintain connectivity of the object graph and bidirectional consistency of relationships.

See also

Optimistic Concurrency: Overview

LINQ to SQL supports optimistic concurrency control. The following table describes terms that apply to optimistic concurrency in LINQ to SQL documentation:

Terms Description
concurrency The situation in which two or more users at the same time try to update the same database row.
concurrency conflict The situation in which two or more users at the same time try to submit conflicting values to one or more columns of a row.
concurrency control The technique used to resolve concurrency conflicts.
optimistic concurrency control The technique that first investigates whether other transactions have changed values in a row before permitting changes to be submitted.

Contrast with pessimistic concurrency control, which locks the record to avoid concurrency conflicts.

Optimistic control is so termed because it considers the chances of one transaction interfering with another to be unlikely.
conflict resolution The process of refreshing a conflicting item by querying the database again and then reconciling differences.

When an object is refreshed, the LINQ to SQL change tracker holds the following data:

- The values originally taken from the database and used for the update check.
- The new database values from the subsequent query.

LINQ to SQL then determines whether the object is in conflict (that is, whether one or more of its member values has changed). If the object is in conflict, LINQ to SQL next determines which of its members are in conflict.

Any member conflict that LINQ to SQL discovers is added to a conflict list.

In the LINQ to SQL object model, an optimistic concurrency conflict occurs when both of the following conditions are true:

  • The client tries to submit changes to the database.

  • One or more update-check values have been updated in the database since the client last read them.

Resolution of this conflict includes discovering which members of the object are in conflict, and then deciding what you want to do about it.

Note

Only members mapped as Always or WhenChanged participate in optimistic concurrency checks. No check is performed for members marked Never. For more information, see UpdateCheck.
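
As a rough illustration of how the UpdateCheck mapping might look (this Employee mapping is hypothetical, not part of the sample schema):

C#
[Table(Name = "Employees")]
public class Employee
{
    [Column(IsPrimaryKey = true)]
    public int EmployeeID;

    [Column]                                   // UpdateCheck.Always by default.
    public string Department;

    [Column(UpdateCheck = UpdateCheck.WhenChanged)]
    public string Assistant;

    [Column(UpdateCheck = UpdateCheck.Never)]  // Excluded from the check.
    public string Notes;
}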

Example

For example, in the following scenario, User1 starts to prepare an update by querying the database for a row. User1 receives a row with values of Alfreds, Maria, and Sales.

User1 wants to change the value of the Manager column to Alfred and the value of the Department column to Marketing. Before User1 can submit those changes, User2 has submitted changes to the database. So now the value of the Assistant column has been changed to Mary and the value of the Department column to Service.

When User1 now tries to submit changes, the submission fails and a ChangeConflictException exception is thrown. This result occurs because the database values for the Assistant column and the Department column are not those that were expected. Members representing the Assistant and Department columns are in conflict. The following table summarizes the situation.

                 Manager   Assistant   Department
Original state   Alfreds   Maria       Sales
User1            Alfred                Marketing
User2                      Mary        Service

You can resolve conflicts such as this in different ways. For more information, see How to: Manage Change Conflicts.

Conflict Detection and Resolution Checklist

You can detect and resolve conflicts at any level of detail. At one extreme, you can resolve all conflicts in one of three ways (see RefreshMode) without additional consideration. At the other extreme, you can designate a specific action for each type of conflict on every member in conflict.

LINQ to SQL Types That Support Conflict Discovery and Resolution

Classes and features that support conflict discovery and resolution in LINQ to SQL include ChangeConflictException, the DataContext.ChangeConflicts collection (ChangeConflictCollection), ObjectChangeConflict, MemberChangeConflict, and the RefreshMode enumeration.

See also

Query Concepts

This section describes key concepts for designing LINQ queries in LINQ to SQL.

In This Section

LINQ to SQL Queries
Refers to general LINQ topics, and explains items specific to LINQ to SQL.

Querying Across Relationships
Explains how to use associations in the LINQ to SQL object model.

Remote vs. Local Execution
Explains how to specify where you want your query executed.

Deferred versus Immediate Loading
Describes how to specify when related objects are loaded.

Related Sections

Programming Guide
Contains links to topics that explain the LINQ to SQL technology.

Object Identity
Explains the concept of object identity in LINQ to SQL.

Introduction to LINQ Queries (C#)
Provides an introduction to query operations in LINQ.

LINQ to SQL Queries

You define LINQ to SQL queries by using the same syntax as you would in LINQ. The only difference is that the objects referenced in your queries are mapped to elements in a database. For more information, see Introduction to LINQ Queries (C#).

LINQ to SQL translates the queries you write into equivalent SQL queries and sends them to the server for processing. More specifically, your application uses the LINQ to SQL API to request query execution. The LINQ to SQL provider then transforms the query into SQL text and delegates execution to the ADO.NET provider. The ADO.NET provider returns query results as a DataReader. The LINQ to SQL provider translates the ADO.NET results into an IQueryable collection of user objects.
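
For illustration, a minimal query of this kind might look like the following sketch (it assumes the Northwnd mapping used elsewhere in this document); the SQL is generated and executed only when the query variable is enumerated:

C#
Northwnd db = new Northwnd(@"c:\northwnd.mdf");

IQueryable<Customer> londonCustomers =
    from cust in db.Customers
    where cust.City == "London"
    select cust;

// Translation to SQL and execution happen here, at enumeration.
foreach (Customer cust in londonCustomers)
{
    Console.WriteLine(cust.CustomerID);
}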

Note

Most methods and operators on .NET Framework built-in types have direct translations to SQL. Those that LINQ cannot translate generate run-time exceptions. For more information, see SQL-CLR Type Mapping.

The following table shows the similarities and differences between LINQ and LINQ to SQL query items.

Item LINQ Query LINQ to SQL Query
Return type of the local variable that holds the query (for queries that return sequences) Generic IEnumerable Generic IQueryable
Specifying the data source Uses the From (Visual Basic) or from (C#) clause Same
Filtering Uses the Where/where clause Same
Grouping Uses the Group…By/groupby clause Same
Selecting (Projecting) Uses the Select/select clause Same
Deferred versus immediate execution See Introduction to LINQ Queries (C#) Same
Implementing joins Uses the Join/join clause Can use the Join/join clause, but more effectively uses the AssociationAttribute attribute. For more information, see Querying Across Relationships.
Remote versus local execution For more information, see Remote vs. Local Execution.
Streaming versus cached querying Not applicable in a local memory scenario

See also

Querying Across Relationships

References to other objects or collections of other objects in your class definitions directly correspond to foreign-key relationships in the database. You can use these relationships when you query by using dot notation to access the relationship properties and navigate from one object to another. These access operations translate to more complex joins or correlated subqueries in the equivalent SQL.

For example, the following query navigates from orders to customers as a way to restrict the results to only those orders for customers located in London.

C#
Northwnd db = new Northwnd(@"northwnd.mdf");

IQueryable<Order> londonOrderQuery =
    from ord in db.Orders
    where ord.Customer.City == "London"
    select ord;

If relationship properties did not exist you would have to write them manually as joins, just as you would do in a SQL query, as in the following code:

C#
Northwnd db = new Northwnd(@"northwnd.mdf");
IQueryable<Order> londonOrderQuery =
    from cust in db.Customers
    join ord in db.Orders on cust.CustomerID equals ord.CustomerID
    where cust.City == "London"
    select ord;

You can use the relationship property to define this particular relationship one time. You can then use the more convenient dot syntax. But relationship properties exist more importantly because domain-specific object models are typically defined as hierarchies or graphs. The objects that you program against have references to other objects. It is only a happy coincidence that object-to-object relationships correspond to foreign-key-styled relationships in databases. Property access then provides a convenient way to write joins.

With regard to this, relationship properties are more important on the results side of a query than as part of the query itself. After the query has retrieved data about a particular customer, the class definition indicates that customers have orders. In other words, you expect the Orders property of a particular customer to be a collection that is populated with all the orders from that customer. That is in fact the contract you declared by defining the classes in this manner. You expect to see the orders there even if the query did not request orders. You expect your object model to maintain an illusion that it is an in-memory extension of the database with related objects immediately available.

Now that you have relationships, you can write queries by referring to the relationship properties defined in your classes. These relationship references correspond to foreign-key relationships in the database. Operations that use these relationships translate to more complex joins in the equivalent SQL. As long as you have defined a relationship (using the AssociationAttribute attribute), you do not have to code an explicit join in LINQ to SQL.

To help maintain this illusion, LINQ to SQL implements a technique called deferred loading. For more information, see Deferred versus Immediate Loading.

Consider the following SQL query to project a list of CustomerID-OrderID pairs:

SELECT t0.CustomerID, t1.OrderID  
FROM   Customers AS t0 INNER JOIN  
          Orders AS t1 ON t0.CustomerID = t1.CustomerID  
WHERE  (t0.City = @p0)  

To obtain the same results by using LINQ to SQL, you use the Orders property reference already existing in the Customer class. The Orders reference provides the necessary information to execute the query and project the CustomerID-OrderID pairs, as in the following code:

C#
Northwnd db = new Northwnd(@"northwnd.mdf");
var idQuery =
    from cust in db.Customers
    from ord in cust.Orders
    where cust.City == "London"
    select new { cust.CustomerID, ord.OrderID };

You can also do the reverse. That is, you can query Orders and use its Customer relationship reference to access information about the associated Customer object. The following code projects the same CustomerID-OrderID pairs as before, but this time by querying Orders instead of Customers.

C#
Northwnd db = new Northwnd(@"northwnd.mdf");
var idQuery =
    from ord in db.Orders
    where ord.Customer.City == "London"
    select new { ord.Customer.CustomerID, ord.OrderID };

See also

Remote vs. Local Execution

You can decide to execute your queries either remotely (that is, the database engine executes the query against the database) or locally (LINQ to SQL executes the query against a local cache).

Remote Execution

Consider the following query:

C#
Northwnd db = new Northwnd(@"northwnd.mdf");
Customer c = db.Customers.Single(x => x.CustomerID == "19283");

foreach (Order ord in
    c.Orders.Where(o => o.ShippedDate.Value.Year == 1998))
{
    // Do something.
}

If your database has thousands of rows of orders, you do not want to retrieve them all to process a small subset. In LINQ to SQL, the EntitySet<TEntity> class implements the IQueryable interface. This approach makes sure that such queries can be executed remotely. Two major benefits flow from this technique:

  • Unnecessary data is not retrieved.

  • A query executed by the database engine is often more efficient because of the database indexes.

Local Execution

In other situations, you might want to have the complete set of related entities in the local cache. For this purpose, EntitySet<TEntity> provides the Load method to explicitly load all the members of the EntitySet<TEntity>.

If an EntitySet<TEntity> is already loaded, subsequent queries are executed locally. This approach helps in two ways:

  • If the complete set must be used locally or multiple times, you can avoid remote queries and associated latencies.

  • The entity can be serialized as a complete entity.

The following code fragment illustrates how local execution can be obtained:

C#
Northwnd db = new Northwnd(@"northwnd.mdf");
Customer c = db.Customers.Single(x => x.CustomerID == "19283");
c.Orders.Load();

foreach (Order ord in
    c.Orders.Where(o => o.ShippedDate.Value.Year == 1998))
{
    // Do something.
}

Comparison

These two capabilities provide a powerful combination of options: remote execution for large collections and local execution for small collections or where the complete collection is needed. You implement remote execution through IQueryable, and local execution against an in-memory IEnumerable<T> collection. To force local execution (that is, IEnumerable<T>), see Convert a Type to a Generic IEnumerable.

Queries Against Unordered Sets

Note the important difference between a local collection that implements List<T> and a collection that provides remote queries executed against unordered sets in a relational database. List<T> methods such as those that use index values require list semantics, which typically cannot be obtained through a remote query against an unordered set. For this reason, such methods implicitly load the EntitySet<TEntity> to allow local execution.
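
For instance (assuming a Customer c obtained as in the previous example), index-based access needs list semantics, so the set is loaded and the lookup is answered locally:

C#
Order firstOrder = c.Orders[0];    // Index access implicitly loads the EntitySet.
int orderCount = c.Orders.Count;   // Answered from the local copy once loaded.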

See also

Deferred versus Immediate Loading

When you query for an object, you actually retrieve only the object you requested. The related objects are not automatically fetched at the same time. (For more information, see Querying Across Relationships.) You cannot see the fact that the related objects are not already loaded, because an attempt to access them produces a request that retrieves them.

For example, you might want to query for a particular set of orders and then only occasionally send an email notification to particular customers. You would not necessarily need initially to retrieve all customer data with every order. You can use deferred loading to defer retrieval of extra information until you absolutely have to. Consider the following example:

C#
Northwnd db = new Northwnd(@"northwnd.mdf");

IQueryable<Order> notificationQuery =
    from ord in db.Orders
    where ord.ShipVia == 3
    select ord;

foreach (Order ordObj in notificationQuery)
{
    if (ordObj.Freight > 200)
        SendCustomerNotification(ordObj.Customer);
    ProcessOrder(ordObj);
}

The opposite might also be true. You might have an application that has to view customer and order data at the same time. You know you need both sets of data. You know your application needs order information for each customer as soon as you get the results. You would not want to submit individual queries for orders for every customer. What you really want is to retrieve the order data together with the customers.

C#
Northwnd db = new Northwnd(@"c:\northwnd.mdf");

db.DeferredLoadingEnabled = false;

// Load each customer's Orders as part of the same query; without this,
// no Orders would be available once deferred loading is turned off.
DataLoadOptions dlo = new DataLoadOptions();
dlo.LoadWith<Customer>(c => c.Orders);
db.LoadOptions = dlo;

IQueryable<Customer> custQuery =
    from cust in db.Customers
    where cust.City == "London"
    select cust;

foreach (Customer custObj in custQuery)
{
    foreach (Order ordObj in custObj.Orders)
    {
        ProcessCustomerOrder(ordObj);
    }
}

You can also join customers and orders in a query by forming the cross-product and retrieving all the relative bits of data as one large projection. But these results are not entities. (For more information, see The LINQ to SQL Object Model). Entities are objects that have identity and that you can modify, whereas these results would be projections that cannot be changed and persisted. Even worse, you would be retrieving lots of redundant data as each customer repeats for each order in the flattened join output.

What you really need is a way to retrieve a set of related objects at the same time. The set is a delineated section of a graph so that you would never be retrieving more or less than was necessary for your intended use. For this purpose, LINQ to SQL provides DataLoadOptions for immediate loading of a region of your object model. Methods include:

  • The LoadWith method, to immediately load data related to the main target.

  • The AssociateWith method, to filter objects retrieved for a particular relationship.
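
A minimal sketch of both methods might look like the following (it assumes the Northwind mapping used in the previous examples; the ShipVia filter is illustrative):

C#
Northwnd db = new Northwnd(@"c:\northwnd.mdf");

DataLoadOptions dlo = new DataLoadOptions();

// Retrieve each customer's Orders together with the Customer itself.
dlo.LoadWith<Customer>(c => c.Orders);

// Filter which Orders are retrieved for that relationship.
dlo.AssociateWith<Customer>(c => c.Orders.Where(o => o.ShipVia == 3));

db.LoadOptions = dlo;

IQueryable<Customer> custQuery =
    from cust in db.Customers
    where cust.City == "London"
    select cust;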

See also

Retrieving Objects from the Identity Cache

This topic describes the types of LINQ to SQL queries that return an object from the identity cache that is managed by the DataContext.

In LINQ to SQL, one of the ways in which the DataContext manages objects is by logging object identities in an identity cache as queries are executed. In some cases, LINQ to SQL will attempt to retrieve an object from the identity cache before executing a query in the database.

In general, for a LINQ to SQL query to return an object from the identity cache, the query must be based on the primary key of an object and must return a single object. In particular, the query must be in one of the general forms shown below.

Note

Pre-compiled queries will not return objects from the identity cache. For more information about pre-compiled queries, see CompiledQuery and How to: Store and Reuse Queries.

A query must be in one of the following general forms to retrieve an object from the identity cache (where Table is a table on the DataContext, such as db.Customers):

  • Table.Function1(predicate)

  • Table.Function2(predicate).Function1()

In these general forms, Function1, Function2, and predicate are defined as follows.

Function1 can be any of the following: First, FirstOrDefault, Single, or SingleOrDefault.

Function2 can be the Where method.

predicate must be an expression in which the object's primary key property is set to a constant value. If an object has a primary key defined by more than one property, each primary key property must be set to a constant value. The following are examples of the form predicate must take:

  • c => c.PK == constant_value

  • c => c.PK1 == constant_value1 && c.PK2 == constant_value2

Example

The following code provides examples of the types of LINQ to SQL queries that retrieve an object from the identity cache.

C#
NorthwindDataContext context = new NorthwindDataContext();

// This query does not retrieve an object from
// the query cache because it is the first query.
// There are no objects in the cache. 
var a = context.Customers.First();
Console.WriteLine("First query gets customer {0}. ", a.CustomerID);

// This query returns an object from the query cache.
var b = context.Customers.Where(c => c.CustomerID == a.CustomerID);
foreach (var customer in b )
{
    Console.WriteLine(customer.CustomerID);
}

// This query returns an object from the identity cache.
// Note that calling FirstOrDefault(), Single(), or SingleOrDefault()
// instead of First() will also return an object from the cache.
var x = context.Customers.
    Where(c => c.CustomerID == a.CustomerID).
    First();
Console.WriteLine(x.CustomerID);

// This query returns an object from the identity cache.
// Note that calling FirstOrDefault(), Single(), or SingleOrDefault()
// instead of First() (each with the same predicate) will also
// return an object from the cache.
var y = context.Customers.First(c => c.CustomerID == a.CustomerID);
Console.WriteLine(y.CustomerID);

See also

Security in LINQ to SQL

Security risks are always present when you connect to a database. Although LINQ to SQL may include some new ways to work with data in SQL Server, it does not provide any additional security mechanisms.

Access Control and Authentication

LINQ to SQL does not have its own user model or authentication mechanisms. Use SQL Server Security to control access to the database, database tables, views, and stored procedures that are mapped to your object model. Grant the minimally required access to users and require strong passwords for user authentication.

Mapping and Schema Information

SQL-CLR type mapping and database schema information in your object model or external mapping file is visible to anyone who can access those files in the file system. Assume that schema information will be available to anyone who can access the object model or external mapping file. To prevent more widespread access to schema information, use file security mechanisms to secure source files and mapping files.

Connection Strings

Using passwords in connection strings should be avoided whenever possible. Not only is a connection string a security risk in its own right, but the connection string may also be added in clear text to the object model or external mapping file when using the Object Relational Designer or SQLMetal command-line tool. Anyone with access to the object model or external mapping file via the file system could see the connection password (if it is included in the connection string).

To minimize such risks, use integrated security to make a trusted connection with SQL Server. By using this approach, you do not have to store a password in the connection string. For more information, see SQL Server Security.

In the absence of integrated security, a clear-text password will be needed in the connection string. The following approaches help secure your connection string, listed in increasing order of risk:

  • Use integrated security.

  • Secure connection strings with passwords and minimize passing around connection strings.

  • Use a System.Data.SqlClient.SqlConnection class instead of a connection string since it limits the duration of exposure. The LINQ to SQL System.Data.Linq.DataContext class can be instantiated using a SqlConnection.

  • Minimize lifetimes and touch points for all connection strings.
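
As a minimal sketch of the first and third options combined (the server name, database name, and Northwnd context constructor are assumptions here), a trusted SqlConnection can be created and handed directly to the DataContext so that no password appears in a string:

C#
using (SqlConnection conn = new SqlConnection(
    @"Data Source=.\SQLEXPRESS;Initial Catalog=Northwind;Integrated Security=true"))
using (Northwnd db = new Northwnd(conn))
{
    // No password is stored in the connection string or in mapping files.
    var londonCustomers = db.Customers.Where(c => c.City == "London");
}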

See also

Serialization

This topic describes LINQ to SQL serialization capabilities. The paragraphs that follow provide information about how to add serialization during code generation at design time and the run-time serialization behavior of LINQ to SQL classes.

You can add serialization code at design time by either of the following methods:

  • In the Object Relational Designer, change the Serialization Mode property to Unidirectional.

  • On the SQLMetal command line, add the /serialization option. For more information, see SqlMetal.exe (Code Generation Tool).

Overview

The code generated by LINQ to SQL provides deferred loading capabilities by default. Deferred loading is very convenient on the mid-tier for transparent loading of data on demand. However, it is problematic for serialization, because the serializer triggers deferred loading whether deferred loading is intended or not. In effect, when an object is serialized, its transitive closure under all outbound defer-loaded references is serialized.

The LINQ to SQL serialization feature addresses this problem, primarily through two mechanisms:

Definitions

  • DataContract serializer: Default serializer used by the Windows Communication Foundation (WCF) component of the .NET Framework 3.0 or later versions.

  • Unidirectional serialization: The serialized version of a class that contains only a one-way association property (to avoid a cycle). By convention, the property on the parent side of a primary-foreign key relationship is marked for serialization. The other side in a bidirectional association is not serialized.

    Unidirectional serialization is the only type of serialization supported by LINQ to SQL.

Code Example

The following code uses the traditional Customer and Order classes from the Northwind sample database, and shows how these classes are decorated with serialization attributes.

C#
// The class is decorated with the DataContract attribute.
[Table(Name="dbo.Customers")]
[DataContract()]
public partial class Customer : INotifyPropertyChanging, INotifyPropertyChanged
{
C#
// Private fields are not decorated with any attributes, and are
// elided.
private string _CustomerID;

// Public properties are decorated with the DataMember
// attribute and the Order property specifying the serial
// number. See the Order class later in this topic for
// exceptions.
public Customer()
{
    this.Initialize();
}

[Column(Storage="_CustomerID", DbType="NChar(5) NOT NULL", CanBeNull=false, IsPrimaryKey=true)]
[DataMember(Order=1)]
public string CustomerID
{
    get
    {
        return this._CustomerID;
    }
    set
    {
        if ((this._CustomerID != value))
        {
            this.OnCustomerIDChanging(value);
            this.SendPropertyChanging();
            this._CustomerID = value;
            this.SendPropertyChanged("CustomerID");
            this.OnCustomerIDChanged();
        }
    }
}
C#
// The following Association property is decorated with
// DataMember because it is the parent side of the
// relationship. The reverse property in the Order class
// does not have a DataMember attribute. This factor
// prevents a 'cycle.'
[Association(Name="FK_Orders_Customers", Storage="_Orders", OtherKey="CustomerID", DeleteRule="NO ACTION")]
[DataMember(Order=13)]
public EntitySet<Order> Orders
{
    get
    {
        return this._Orders;
    }
    set
    {
        this._Orders.Assign(value);
    }
}

For the Order class in the following example, only the reverse association property corresponding to the Customer class is shown for brevity. It does not have a DataMemberAttribute attribute to avoid a cycle.

C#
// The class for the Orders table is also decorated with the
// DataContract attribute.
[Table(Name="dbo.Orders")]
[DataContract()]
public partial class Order : INotifyPropertyChanging, INotifyPropertyChanged
C#
// Private fields for the Orders table are not decorated with
// any attributes, and are elided.
private int _OrderID;

// Public properties are decorated with the DataMember
// attribute.
// The reverse Association property on the side of the
// foreign key does not have the DataMember attribute.
[Association(Name = "FK_Orders_Customers", Storage = "_Customer", ThisKey = "CustomerID", IsForeignKey = true)]
public Customer Customer

How to Serialize the Entities

You can serialize the entities in the code shown in the previous section as follows:

C#
Northwnd db = new Northwnd(@"c:\northwnd.mdf");

Customer cust = db.Customers.Where(c => c.CustomerID ==
    "ALFKI").Single();

DataContractSerializer dcs = 
    new DataContractSerializer(typeof(Customer));
StringBuilder sb = new StringBuilder();
XmlWriter writer = XmlWriter.Create(sb);
dcs.WriteObject(writer, cust);
writer.Close();
string xml = sb.ToString();

Self-Recursive Relationships

Self-recursive relationships follow the same pattern. The association property corresponding to the foreign key does not have a DataMemberAttribute attribute, whereas the parent property does.

Consider the following class that has two self-recursive relationships: Employee.Manager/Reports and Employee.Mentor/Mentees.

C#
// No DataMember attribute.
public Employee Manager;
[DataMember(Order = 3)]
public EntitySet<Employee> Reports;

// No DataMember
public Employee Mentor;
[DataMember(Order = 5)]
public EntitySet<Employee> Mentees;

See also

Stored Procedures

LINQ to SQL uses methods in your object model to represent stored procedures in the database. You designate methods as stored procedures by applying the FunctionAttribute attribute and, where required, the ParameterAttribute attribute. For more information, see The LINQ to SQL Object Model.

Developers using Visual Studio would typically use the Object Relational Designer to map stored procedures. The topics in this section show how to form and call these methods in your application if you write the code yourself.

In This Section

How to: Return Rowsets
Describes how to return rows of data and shows how to use an input parameter.

How to: Use Stored Procedures that Take Parameters
Describes how to use input and output parameters.

How to: Use Stored Procedures Mapped for Multiple Result Shapes
Describes how to provide for returns of multiple shapes in the same stored procedure.

How to: Use Stored Procedures Mapped for Sequential Result Shapes
Describes how to provide for multiple shapes where the return sequence is known.

Customizing Operations By Using Stored Procedures
Describes how to use stored procedures to implement insert, update, and delete operations.

Customizing Operations by Using Stored Procedures Exclusively
Describes how to use nothing but stored procedures to implement insert, update, and delete operations.

Related Sections

Programming Guide
Provides information about how to create and use your LINQ to SQL object model.

Walkthrough: Using Only Stored Procedures (Visual Basic)
Includes procedures that illustrate how to use stored procedures in Visual Basic.

Walkthrough: Using Only Stored Procedures (C#)
Includes procedures that illustrate how to use stored procedures in C#.

How to: Return Rowsets

This example returns a rowset from the database, and includes an input parameter to filter the result.

When you execute a stored procedure that returns a rowset, you use a result class that stores the results returned by the stored procedure. For more information, see Analyzing LINQ to SQL Source Code.

Example

The following example represents a stored procedure that returns rows of customers and uses an input parameter to return only those rows that list "London" as the customer city. The example assumes an enumerable CustomersByCityResult class.

CREATE PROCEDURE [dbo].[Customers By City]  
    (@param1 NVARCHAR(20))  
AS  
BEGIN  
    -- SET NOCOUNT ON added to prevent extra result sets from  
    -- interfering with SELECT statements.  
    SET NOCOUNT ON;  
    SELECT CustomerID, ContactName, CompanyName, City from Customers  
        as c where c.City=@param1  
END  
C#
[Function(Name="dbo.Customers By City")]
public ISingleResult<CustomersByCityResult> CustomersByCity([Parameter(DbType="NVarChar(20)")] string param1)
{
    IExecuteResult result = this.ExecuteMethodCall(this, ((MethodInfo)(MethodInfo.GetCurrentMethod())), param1);
    return ((ISingleResult<CustomersByCityResult>)(result.ReturnValue));
}

// Call the stored procedure.
void ReturnRowset()
{
    Northwnd db = new Northwnd(@"c:\northwnd.mdf");

    ISingleResult<CustomersByCityResult> result =
        db.CustomersByCity("London");

    foreach (CustomersByCityResult cust in result)
    {
        Console.WriteLine("CustID={0}; City={1}", cust.CustomerID,
            cust.City);
    }
}

See also

How to: Use Stored Procedures that Take Parameters

LINQ to SQL maps output parameters to reference parameters, and for value types declares the parameter as nullable.

For an example of how to use an input parameter in a query that returns a rowset, see How to: Return Rowsets.

Example

The following example takes a single input parameter (the customer ID) and returns an out parameter (the total sales for that customer).

CREATE PROCEDURE [dbo].[CustOrderTotal]   
@CustomerID nchar(5),  
@TotalSales money OUTPUT  
AS  
SELECT @TotalSales = SUM(OD.UNITPRICE*(1-OD.DISCOUNT) * OD.QUANTITY)  
FROM ORDERS O, "ORDER DETAILS" OD  
where O.CUSTOMERID = @CustomerID AND O.ORDERID = OD.ORDERID  
C#
[Function(Name="dbo.CustOrderTotal")]
[return: Parameter(DbType="Int")]
public int CustOrderTotal([Parameter(Name="CustomerID", DbType="NChar(5)")] string customerID, [Parameter(Name="TotalSales", DbType="Money")] ref System.Nullable<decimal> totalSales)
{
    IExecuteResult result = this.ExecuteMethodCall(this, ((MethodInfo)(MethodInfo.GetCurrentMethod())), customerID, totalSales);
    totalSales = ((System.Nullable<decimal>)(result.GetParameterValue(1)));
    return ((int)(result.ReturnValue));
}

Example

You would call this stored procedure as follows:

C#
Northwnd db = new Northwnd(@"c:\northwnd.mdf");
decimal? totalSales = 0;
db.CustOrderTotal("alfki", ref totalSales);

Console.WriteLine(totalSales);

See also

How to: Use Stored Procedures Mapped for Multiple Result Shapes

When a stored procedure can return multiple result shapes, the return type cannot be strongly typed to a single projection shape. Although LINQ to SQL can generate all possible projection types, it cannot know the order in which they will be returned.

Contrast this scenario with stored procedures that produce multiple result shapes sequentially. For more information, see How to: Use Stored Procedures Mapped for Sequential Result Shapes.

The ResultTypeAttribute attribute is applied to stored procedures that return multiple result types to specify the set of types the procedure can return.

Example

In the following SQL code example, the result shape depends on the input (shape = 1 or shape = 2). You do not know which projection will be returned first.

CREATE PROCEDURE VariableResultShapes(@shape int)  
AS  
if(@shape = 1)  
    select CustomerID, ContactTitle, CompanyName from customers  
else if(@shape = 2)  
    select OrderID, ShipName from orders  
C#
[Function(Name="dbo.VariableResultShapes")]
[ResultType(typeof(VariableResultShapesResult1))]
[ResultType(typeof(VariableResultShapesResult2))]
public IMultipleResults VariableResultShapes([Parameter(DbType="Int")] System.Nullable<int> shape)
{
    IExecuteResult result = this.ExecuteMethodCall(this, ((MethodInfo)(MethodInfo.GetCurrentMethod())), shape);
    return ((IMultipleResults)(result.ReturnValue));
}

Example

You would use code similar to the following to execute this stored procedure.

Note

You must use the GetResult pattern to obtain an enumerator of the correct type, based on your knowledge of the stored procedure.

C#
Northwnd db = new Northwnd(@"c:\northwnd.mdf");

// Assign the results of the procedure with an argument
// of (1) to local variable 'result'.
IMultipleResults result = db.VariableResultShapes(1);

// Iterate through the list and write results (the company names)
// to the console.
foreach(VariableResultShapesResult1 compName in
    result.GetResult<VariableResultShapesResult1>())
{
    Console.WriteLine(compName.CompanyName);
}

// Pause to view company names; press Enter to continue.
Console.ReadLine();

// Assign the results of the procedure with an argument
// of (2) to local variable 'result'.
IMultipleResults result2 = db.VariableResultShapes(2);

// Iterate through the list and write results (the order IDs)
// to the console.
foreach (VariableResultShapesResult2 ord in
    result2.GetResult<VariableResultShapesResult2>())
{
    Console.WriteLine(ord.OrderID);
}

See also

How to: Use Stored Procedures Mapped for Sequential Result Shapes

This kind of stored procedure can generate more than one result shape, but you know in what order the results are returned. Contrast this scenario with the scenario where you do not know the sequence of the returns. For more information, see How to: Use Stored Procedures Mapped for Multiple Result Shapes.

Example

Here is the T-SQL of a stored procedure that returns multiple result shapes sequentially:

CREATE PROCEDURE MultipleResultTypesSequentially  
AS  
select * from products  
select * from customers  
C#
[Function(Name="dbo.MultipleResultTypesSequentially")]
[ResultType(typeof(MultipleResultTypesSequentiallyResult1))]
[ResultType(typeof(MultipleResultTypesSequentiallyResult2))]
public IMultipleResults MultipleResultTypesSequentially()
{
    IExecuteResult result = this.ExecuteMethodCall(this, ((MethodInfo)(MethodInfo.GetCurrentMethod())));
    return ((IMultipleResults)(result.ReturnValue));
}

Example

You would use code similar to the following to execute this stored procedure.

C#
Northwnd db = new Northwnd(@"c:\northwnd.mdf");

IMultipleResults sprocResults =
    db.MultipleResultTypesSequentially();

// First read products.
foreach (Product prod in sprocResults.GetResult<Product>())
{
    Console.WriteLine(prod.ProductID);
}

// Next read customers.
foreach (Customer cust in sprocResults.GetResult<Customer>())
{
    Console.WriteLine(cust.CustomerID);
}

See also

Customizing Operations By Using Stored Procedures

Stored procedures represent a common approach to overriding default behavior. The examples in this topic show how you can use generated method wrappers for stored procedures, and how you can call stored procedures directly.

If you are using Visual Studio, you can use the Object Relational Designer to assign stored procedures to perform inserts, updates, and deletes.

Note

To read back database-generated values, use output parameters in your stored procedures. If you cannot use output parameters, write a partial method implementation instead of relying on overrides generated by the Object Relational Designer. Members mapped to database-generated values must be set to appropriate values after INSERT or UPDATE operations have successfully completed. For more information, see Responsibilities of the Developer In Overriding Default Behavior.

Example

Description

In the following example, assume that the Northwind class contains two methods to call stored procedures that are being used for overrides in a derived class.

Code

C#
[Function()]
public IEnumerable<Order> CustomerOrders(
    [Parameter(Name = "CustomerID", DbType = "NChar(5)")]
    string customerID)
{
    IExecuteResult result = this.ExecuteMethodCall(this,
        ((MethodInfo)(MethodInfo.GetCurrentMethod())),
        customerID);
    return ((IEnumerable<Order>)(result.ReturnValue));
}

[Function()]
public IEnumerable<Customer> CustomerById(
    [Parameter(Name = "CustomerID", DbType = "NChar(5)")]
    string customerID)
{
    IExecuteResult result = this.ExecuteMethodCall(this,
        ((MethodInfo)(MethodInfo.GetCurrentMethod())),
        customerID);
    return (IEnumerable<Customer>)(result.ReturnValue);
}

Example

Description

The following class uses these methods for the override.

Code

C#
public class NorthwindThroughSprocs : Northwnd
{

    public NorthwindThroughSprocs(string connection) :
        base(connection)
    {
    }

    // Override loading of Customer.Orders by using method wrapper.
    private IEnumerable<Order> LoadOrders(Customer customer)
    {
        return this.CustomerOrders(customer.CustomerID);
    }
    // Override loading of Order.Customer by using method wrapper.
    private Customer LoadCustomer(Order order)
    {
        return this.CustomerById(order.CustomerID).Single();
    }
    // Override INSERT operation on Customer by calling the
    // stored procedure directly.
    private void InsertCustomer(Customer customer)
    {
        // Call the INSERT stored procedure directly.
        this.ExecuteCommand("exec sp_insert_customer …");
    }
    // The UPDATE override works similarly, that is, by
    // calling the stored procedure directly.
    private void UpdateCustomer(Customer original, Customer current)
    {
        // Call the UPDATE stored procedure by using current
        // and original values.
        this.ExecuteCommand("exec sp_update_customer …");
    }
    // The DELETE override works similarly.
    private void DeleteCustomer(Customer customer)
    {
        // Call the DELETE stored procedure directly.
        this.ExecuteCommand("exec sp_delete_customer …");
    }
}

Example

Description

You can use NorthwindThroughSprocs exactly as you would use Northwnd.

Code

C#
NorthwindThroughSprocs db = new NorthwindThroughSprocs("");
var custQuery =
    from cust in db.Customers
    where cust.City == "London"
    select cust;

foreach (Customer custObj in custQuery)
    // deferred loading of cust.Orders uses the override LoadOrders.
    foreach (Order ord in custObj.Orders)
        // ...
        // Make some changes to customers/orders.
        // Overrides for Customer are called during the execution of the
        // following:
        db.SubmitChanges();

See also

Customizing Operations by Using Stored Procedures Exclusively

Access to data by using only stored procedures is a common scenario.

Example

Description

You can modify the example provided in Customizing Operations By Using Stored Procedures by replacing even the first query (which causes dynamic SQL execution) with a method call that wraps a stored procedure.

Assume CustomersByCity is the method, as in the following example.

Code

C#
[Function()]
public IEnumerable<Customer> CustomersByCity(
    [Parameter(Name = "City", DbType = "NVarChar(15)")] 
    string city)
{
    IExecuteResult result = this.ExecuteMethodCall(this,
        ((MethodInfo)(MethodInfo.GetCurrentMethod())),
        city);
    return ((IEnumerable<Customer>)(result.ReturnValue));
}

The following code executes without any dynamic SQL.

C#
NorthwindThroughSprocs db = new NorthwindThroughSprocs("...");
// Use a method call (stored procedure wrapper) instead of
// a LINQ query against the database.
var custQuery =
    db.CustomersByCity("London");

foreach (Customer custObj in custQuery)
{
    // Deferred loading of custObj.Orders uses the override
    // LoadOrders. There is no dynamic SQL.
    foreach (Order ord in custObj.Orders)
    {
        // Make some changes to customers/orders.
        // Overrides for Customer are called during the execution
        // of the following.
    }
}
db.SubmitChanges();

See also

Transaction Support

LINQ to SQL supports three distinct transaction models. The following lists these models in the order of checks performed.

Explicit Local Transaction

When SubmitChanges is called, if the Transaction property is set to a transaction (IDbTransaction), the SubmitChanges call is executed in the context of that same transaction.

It is your responsibility to commit or roll back the transaction after successful execution. The connection corresponding to the transaction must match the connection used for constructing the DataContext. An exception is thrown if a different connection is used.

Explicit Distributable Transaction

You can call LINQ to SQL APIs (including but not limited to SubmitChanges) in the scope of an active Transaction. LINQ to SQL detects that the call is in the scope of a transaction and does not create a new transaction. LINQ to SQL also avoids closing the connection in this case. You can perform query and SubmitChanges executions in the context of such a transaction.

Implicit Transaction

When you call SubmitChanges, LINQ to SQL checks to see whether the call is in the scope of a Transaction or if the Transaction property (IDbTransaction) is set to a user-started local transaction. If it finds neither transaction, LINQ to SQL starts a local transaction (IDbTransaction) and uses it to execute the generated SQL commands. When all SQL commands have been successfully completed, LINQ to SQL commits the local transaction and returns.
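
The following is a minimal sketch of the three models, assuming a generated Northwnd DataContext for the Northwind sample database and a reference to System.Transactions for TransactionScope; the connection strings are illustrative.

C#
// 1. Explicit local transaction: you create, commit, and roll back the
//    transaction yourself, and SubmitChanges runs inside it.
using (Northwnd db = new Northwnd("..."))
{
    db.Connection.Open();
    db.Transaction = db.Connection.BeginTransaction();
    try
    {
        db.SubmitChanges();        // Executed in the context of db.Transaction.
        db.Transaction.Commit();
    }
    catch
    {
        db.Transaction.Rollback();
        throw;
    }
}

// 2. Explicit distributable transaction: LINQ to SQL detects the ambient
//    System.Transactions transaction and does not start its own.
using (TransactionScope scope = new TransactionScope())
using (Northwnd db = new Northwnd("..."))
{
    db.SubmitChanges();
    scope.Complete();
}

// 3. Implicit transaction: with neither of the above, SubmitChanges starts,
//    uses, and commits its own local transaction.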

See also

SQL-CLR Type Mismatches

LINQ to SQL automates much of the translation between the object model and SQL Server. Nevertheless, some situations prevent exact translation. These key mismatches between the common language runtime (CLR) types and the SQL Server database types are summarized in the following sections. You can find more details about specific type mappings and function translation at SQL-CLR Type Mapping and Data Types and Functions.

Data Types

Translation between the CLR and SQL Server occurs when a query is being sent to the database, and when the results are sent back to your object model. For example, the following Transact-SQL query requires two value conversions:

SQL
Select DateOfBirth From Customer Where CustomerId = @id

Before the query can be executed on SQL Server, the value for the Transact-SQL parameter must be specified. In this example, the id parameter value must first be translated from a CLR System.Int32 type to a SQL Server INT type so that the database can understand what the value is. Then to retrieve the results, the SQL Server DateOfBirth column must be translated from a SQL Server DATETIME type to a CLR System.DateTime type for use in the object model. In this example, the types in the CLR object model and SQL Server database have natural mappings. But, this is not always the case.
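
The following is a hedged sketch of a LINQ to SQL query that would produce the Transact-SQL above; the Customer entity and its CustomerId and DateOfBirth members are assumed for illustration.

C#
int id = 12345;                      // CLR System.Int32, sent as a SQL INT parameter.
DateTime dob =                       // SQL DATETIME, returned as a CLR System.DateTime.
    (from c in db.Customers
     where c.CustomerId == id
     select c.DateOfBirth).Single();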

Missing Counterparts

The following types do not have reasonable counterparts.

  • Mismatches in the CLR System namespace:

    • Unsigned integers. These types are typically mapped to their signed counterparts of larger size to avoid overflow. Literals can be converted to a signed numeric of the same or smaller size, based on value.

    • Boolean. These types can be mapped to a bit or larger numeric or string. A literal can be mapped to an expression that evaluates to the same value (for example, 1=1 in SQL for True in CLS).

    • TimeSpan. This type represents the difference between two DateTime values and does not correspond to the timestamp of SQL Server. The CLR System.TimeSpan may also map to the SQL Server TIME type in some cases. The SQL Server TIME type was only intended to represent positive values less than 24 hours. The CLR TimeSpan has a much larger range.

    Note

    SQL Server-specific .NET Framework types in System.Data.SqlTypes are not included in this comparison.

  • Mismatches in SQL Server:

    • Fixed length character types. Transact-SQL distinguishes between Unicode and non-Unicode categories and has three distinct types in each category: fixed length nchar/char, variable length nvarchar/varchar, and larger-sized ntext/text. The fixed length character types could be mapped to the CLR System.Char type for retrieving characters, but they do not really correspond to the same type in conversions and behavior.

    • Bit. Although the bit domain has the same number of values as Nullable<Boolean>, the two are different types. Bit takes values 1 and 0 instead of true/false, and cannot be used as an equivalent to Boolean expressions.

    • Timestamp. Unlike the CLR System.TimeSpan type, the SQL Server TIMESTAMP type represents an 8-byte number generated by the database that is unique for each update and is not based on the difference between DateTime values.

    • Money and SmallMoney. These types can be mapped to Decimal but are basically different types and are treated as such by server-based functions and conversions.

Multiple Mappings

There are many SQL Server data types that you can map to one or more CLR data types. There are also many CLR types that you can map to one or more SQL Server types. Although a mapping may be supported by LINQ to SQL, it does not mean that the two types mapped between the CLR and SQL Server are a perfect match in precision, range, and semantics. Some mappings may include differences in any or all of these dimensions. You can find details about these potential differences for the various mapping possibilities at SQL-CLR Type Mapping.

User-defined Types

User-defined CLR types are designed to help bridge the type system gap. Nevertheless they surface interesting issues about type versioning. A change in the version on the client might not be matched by a change in the type stored on the database server. Any such change causes another type mismatch where the type semantics might not match and the version gap is likely to become visible. Further complications occur as inheritance hierarchies are refactored in successive versions.

Expression Semantics

In addition to the pairwise mismatch between CLR and database types, expressions add complexity to the mismatch. Mismatches in operator semantics, function semantics, implicit type conversion, and precedence rules must be considered.

The following subsections illustrate the mismatch between apparently similar expressions. It might be possible to generate SQL expressions that are semantically equivalent to a given CLR expression. However, it is not clear whether the semantic differences between apparently similar expressions are evident to a CLR user, and therefore whether the changes that are required for semantic equivalence are intended or not. This is an especially critical issue when an expression is evaluated for a set of values. The visibility of the difference might be data-dependent and hard to identify during coding and debugging.

Null Semantics

SQL expressions provide three-valued logic for Boolean expressions. The result can be true, false, or null. By contrast, CLR specifies two-valued Boolean result for comparisons involving null values. Consider the following code:

C#
Nullable<int> i = null;
Nullable<int> j = null;
if (i == j)
{
    // This branch is executed.
}
SQL
-- Assume col1 and col2 are integer columns with null values.
-- Assume that ANSI null behavior has not been explicitly
--  turned off.
Select …
From …
Where col1 = col2
-- Evaluates to null, not true and the corresponding row is not
--   selected.
-- To obtain matching behavior (i -> col1, j -> col2) change
--   the query to the following:
Select …
From …
Where
    col1 = col2
or (col1 is null and col2 is null)
-- (Visual Basic 'Nothing'.)

A similar problem occurs with the assumption about two-valued results.

C#
if ((i == j) || (i != j)) // Redundant condition.
{
    // ...
}
SQL
-- Assume col1 and col2 are nullable columns.
-- Assume that ANSI null behavior has not been explicitly
--   turned off.
Select …
From …
Where
    col1 = col2
or col1 != col2
-- Visual Basic: col1 <> col2.

-- Excludes the case where the boolean expression evaluates
--   to null. Therefore the where clause does not always
--   evaluate to true.

In the previous case, you can get equivalent behavior in generating SQL, but the translation might not accurately reflect your intention.

LINQ to SQL does not impose C# null or Visual Basic nothing comparison semantics on SQL. Comparison operators are syntactically translated to their SQL equivalents. The semantics reflect SQL semantics as defined by server or connection settings. Two null values are considered unequal under default SQL Server settings (although you can change the settings to change the semantics). Regardless, LINQ to SQL does not consider server settings in query translation.

A comparison with the literal null (nothing) is translated to the appropriate SQL version (is null or is not null).
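
For example, the following sketch (using the Northwind-style Customers table, where Region is nullable) compares against the literal null; the generated SQL uses IS NULL rather than an equality comparison.

C#
var customersWithoutRegion =
    from cust in db.Customers
    where cust.Region == null      // Translated to: WHERE [t0].[Region] IS NULL
    select cust;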

The value of null (nothing) in collation is defined by SQL Server; LINQ to SQL does not change the collation.

Type Conversion and Promotion

SQL supports a rich set of implicit conversions in expressions. Similar expressions in C# would require an explicit cast. For example:

  • Nvarchar and DateTime types can be compared in SQL without any explicit casts; C# requires explicit conversion.

  • Decimal is implicitly converted to DateTime in SQL. C# does not allow for an implicit conversion.

Likewise, type precedence in Transact-SQL differs from type precedence in C# because the underlying set of types is different. In fact, there is no clear subset/superset relationship between the precedence lists. For example, comparing an nvarchar with a varchar causes the implicit conversion of the varchar expression to nvarchar. The CLR provides no equivalent promotion.

In simple cases, these differences mean that casts in a CLR expression are redundant in the corresponding SQL expression. More importantly, the intermediate results of a SQL expression might be implicitly promoted to a type that has no accurate counterpart in C#, and vice versa. Overall, the testing, debugging, and validation of such expressions places a significant burden on the user.
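
As a small illustration of the first bullet above, the following pure C# sketch shows the explicit conversion that C# requires where SQL would compare an nvarchar value with a datetime value implicitly; the variable names are illustrative.

C#
string shippedDateText = "1997-07-04";
DateTime cutoff = new DateTime(1997, 1, 1);
// if (shippedDateText > cutoff)                 // Does not compile in C#.
if (DateTime.Parse(shippedDateText) > cutoff)    // Explicit conversion is required.
{
    // ...
}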

Collation

Transact-SQL supports explicit collations as annotations to character string types. These collations determine the validity of certain comparisons. For example, comparing two columns with different explicit collations is an error. The use of the much simpler CTS string type does not cause such errors. Consider the following example:

SQL
create table T2 (
    Col1 nvarchar(10),
    Col2      nvarchar(10) collate Latin1_General_CI_AS
)
C#
class C
{
    string s1;       // Map to T2.Col1.
    string s2;       // Map to T2.Col2.

    void Compare()
    {
        if (s1 == s2) // This is correct.
        {
            // ...
        }
    }
}
SQL
Select …
From …
Where Col1 = Col2
-- Error, collation conflict.

In effect, the collation subclause creates a restricted type that is not substitutable.

Similarly, the sort order can be significantly different across the type systems. This difference affects the sorting of results. Guid is sorted on all 16 bytes by lexicographic order (IComparable()), whereas T-SQL compares GUIDs in the following order: node(10-15), clock-seq(8-9), time-high(6-7), time-mid(4-5), time-low(0-3). This ordering was done in SQL 7.0 when NT-generated GUIDs had such an octet order. The approach ensured that GUIDs generated at the same node cluster came together in sequential order according to timestamp. The approach was also useful for building indexes (inserts become appends instead of random IOs). The order was scrambled later in Windows because of privacy concerns, but SQL must maintain compatibility. A workaround is to use SqlGuid instead of Guid.
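
The following is a hedged sketch of that workaround for client-side sorting (it assumes using directives for System, System.Linq, and System.Data.SqlTypes); wrapping each value in SqlGuid makes the sort order match SQL Server's GUID comparison.

C#
Guid[] customerGuids = { Guid.NewGuid(), Guid.NewGuid(), Guid.NewGuid() };
IOrderedEnumerable<Guid> sortedLikeSqlServer =
    customerGuids.OrderBy(g => new SqlGuid(g));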

Operator and Function Differences

Operators and functions that are essentially comparable have subtly different semantics. For example:

  • C# specifies short circuit semantics based on lexical order of operands for logical operators && and ||. SQL on the other hand is targeted for set-based queries and therefore provides more freedom for the optimizer to decide the order of execution. Some of the implications include the following:

    • A semantically equivalent translation would require the "CASE … WHEN … THEN" construct in SQL to avoid reordering the execution of the operands.

    • A loose translation to the AND/OR operators could cause unexpected errors if the C# expression relies on the second operand being evaluated only after (and based on) the result of evaluating the first operand.

  • Round() function has different semantics in .NET Framework and in T-SQL.

  • The starting index for strings is 0 in the CLR but 1 in SQL. Therefore, any function that takes an index requires index translation.

  • The CLR supports the modulus ('%') operator for floating-point numbers, but SQL does not.

  • The Like operator effectively acquires automatic overloads based on implicit conversions. Although the Like operator is defined to operate on character string types, implicit conversion from numeric types or DateTime types allows for those non-string types to be used with Like just as well. In CTS, comparable implicit conversions do not exist. Therefore, additional overloads are needed.

    Note

    This Like operator behavior applies to C# only; the Visual Basic Like keyword is unchanged.

  • Overflow is always checked in SQL, but in C# (unlike Visual Basic) checking must be enabled explicitly to avoid silent wraparound. Given integer columns C1, C2, and C3, storing C1 + C2 in C3 (Update T Set C3 = C1 + C2) raises an arithmetic overflow error in SQL when the sum exceeds the integer range, whereas the equivalent unchecked C# addition wraps around, as the following example shows.

SQL
create table T3 (
    Col1      integer,
    Col2      integer
)
insert into T3 (col1, col2) values (2147483647, 5)
-- Valid values: max integer value and 5.
select * from T3 where col1 + col2 < 0
-- Produces arithmetic overflow error.
    
C#
// C# overflow in absence of explicit checks.
int i = Int32.MaxValue;
int j = 5;
if (i+j < 0) Console.WriteLine("Overflow!");
// This code prints the overflow message.
  • SQL performs symmetric arithmetic rounding while .NET Framework uses banker’s rounding. See Knowledgebase article 196652 for additional details.

  • By default, for common locales, character-string comparisons are case-insensitive in SQL, whereas in Visual Basic and in C# they are case-sensitive. For example, the SQL comparison Col1 = 'food' matches the values Food and FOOD, while the C# expression s == "food" (s = "food" in Visual Basic) matches neither, as the following example shows.

SQL
-- Assume default US-English locale (case insensitive).
create table T4 (
    Col1      nvarchar (256)
)
insert into T4 values ('Food')
insert into T4 values ('FOOD')
select * from T4 where Col1 = 'food'
-- Both the rows are returned because of case-insensitive matching.
    
C#
// C# equivalent on collections of Strings in place of nvarchars.
String[] strings = { "food", "FOOD" };
foreach (String s in strings)
{
    if (s == "food")
    {
        Console.WriteLine(s);
    }
}
// Only "food" is returned.
  • Operators/functions applied to fixed-length character type arguments in SQL have significantly different semantics than the same operators/functions applied to the CLR System.String. This could also be viewed as an extension of the missing-counterpart problem discussed in the section about types.

SQL
create table T5 (
    Col1      nchar(4)
)
Insert into T5(Col1) values ('21');
Insert into T5(Col1) values ('1021');
Select * from T5 where Col1 like '%1'
-- Only the second row with Col1 = '1021' is returned.
-- Not the first row!
C#
// Assume Like(String, String) method.
string s = ""; // Map to T5.Col1.
if (System.Data.Linq.SqlClient.SqlMethods.Like(s, "%1"))
{
    Console.WriteLine(s);
}
// Expected to return true for both "21" and "1021"

A similar problem occurs with string concatenation.

SQL
create table T6 (
    Col1      nchar(4),
    Col2      nchar(4)
)
Insert into T6 values ('a', 'b');
Select Col1+Col2 from T6
-- Returns concatenation of padded strings "a   b   " and not "ab".
    

In summary, a convoluted translation might be required for CLR expressions and additional operators/functions may be necessary to expose SQL functionality.

Type Casting

In C# and in SQL, users can override the default semantics of expressions by using explicit type casts (Cast and Convert). However, exposing this capability across the type system boundary poses a dilemma. A SQL cast that provides the desired semantics cannot be easily translated to a corresponding C# cast. On the other hand, a C# cast cannot be directly translated into an equivalent SQL cast because of type mismatches, missing counterparts, and different type precedence hierarchies. There is a trade-off between exposing the type system mismatch and losing significant power of expression.

In other cases, type casting might not be needed in either domain for validation of an expression but might be required to make sure that a non-default mapping is correctly applied to the expression.

SQL
-- Example from "Non-default Mapping" section extended
create table T5 (
    Col1      nvarchar(10),
    Col2      nvarchar(10)
)
Insert into T5(col1, col2) values ('3', '2');
C#
class C
{
    int x;        // Map to T5.Col1.
    int y;        // Map to T5.Col2.

    void Casting()
    {
        // Intended predicate.
        if (x + y > 4)
        {
            // valid for the data above
        }
    }
}
SQL
Select *
From T5
Where Col1 + Col2 > 4
-- "Col1 + Col2" expr evaluates to '32'

Performance Issues

Accounting for some SQL Server-CLR type differences may result in a decrease in performance when crossing between the CLR and SQL Server type systems. Examples of scenarios impacting performance include the following:

  • Forced order of evaluation for logical and/or operators. Generating SQL that enforces the order of predicate evaluation restricts the SQL optimizer's ability to reorder the query.

  • Type conversions, whether introduced by a CLR compiler or by an Object-Relational query implementation, may curtail index usage.

    For example,

    SQL
-- Table DDL
create table T5 (
    Col1      varchar(100)
)
C#
class C5
{
    string s;        // Map to T5.Col1.
}

Consider the translation of expression (s = SOME_STRING_CONSTANT).

SQL
-- Corresponding part of SQL where clause
Where …
Col1 = SOME_STRING_CONSTANT
-- This expression is of the form <varchar> = <nvarchar>.
-- Hence SQL introduces a conversion from varchar to nvarchar,
--   resulting in
Where …
Convert(nvarchar(100), Col1) = SOME_STRING_CONSTANT
-- Cannot use the index for column Col1 for some implementations.
    

In addition to semantic differences, it is important to consider impacts to performance when crossing between the SQL Server and CLR type systems. For large data sets, such performance issues can determine whether an application is deployable.

See also

SQL-CLR Custom Type Mappings

Type mapping between SQL Server and the common language runtime (CLR) is specified automatically when you use the SQLMetal command-line tool or the Object Relational Designer (O/R Designer).

When no customized mapping is performed, these tools assign default type mappings as described in SQL-CLR Type Mapping. If you want type mappings that differ from these defaults, you need to customize the type mappings.

When customizing type mappings, the recommended approach is to make the changes in an intermediary DBML file. Then use your customized DBML file when you create your code and mapping files with SQLMetal or the O/R Designer.

Once you instantiate the DataContext object from the code and mapping files, the DataContext.CreateDatabase method creates a database based on the type mappings that are specified. If there are no CLR type attributes specified in the mappings, the default type mappings will be used.

Customization with SQLMetal or O/R Designer

With SQLMetal and the O/R Designer, you can automatically create an object model that includes the type mapping information inside or outside the code file. Because these files are overwritten by SQLMetal or the O/R Designer each time you recreate your mappings, the recommended approach to specifying custom type mappings is to customize a DBML file.

To customize type mappings with SQLMetal or O/R Designer, first generate a DBML file. Then, before generating the code file or mapping file, modify the DBML file to identify the desired type mappings. With SQLMetal, you have to manually change the Type and DbType attributes in the DBML file to make your type mapping customizations. With O/R Designer, you can make your changes within the Designer. For more information about using the O/R Designer, see LINQ to SQL Tools in Visual Studio.

Note

Some type mappings may result in overflow or data loss exceptions while translating to or from the database. Carefully review the Type Mapping Run-time Behavior Matrix in SQL-CLR Type Mapping before making any customizations.

In order for your type mapping customizations to be recognized by SQLMetal or O/R Designer, you need to make sure that these tools are supplied with the path to your custom DBML file when you generate your code file or external mapping file. Although not required for type mapping customization, it is recommended that you always separate your type mapping information from your code file and generate the additional external type mapping file. Doing so will leave some flexibility by not requiring that the code file be recompiled.

Incorporating Database Changes

When your database changes, you will need to update your DBML file to reflect those changes. One way to do this is to automatically create a new DBML file and then re-do your type mapping customizations. Alternatively, you could compare the differences between your new DBML file and your customized DBML file and update your custom DBML file manually to reflect the database change.

See also

User-Defined Functions

LINQ to SQL uses methods in your object model to represent user-defined functions. You designate methods as functions by applying the FunctionAttribute attribute and, where required, the ParameterAttribute attribute. For more information, see The LINQ to SQL Object Model.

To avoid an InvalidOperationException, user-defined functions in LINQ to SQL must be in one of the following forms:

  • A function wrapped as a method call having the correct mapping attributes. For more information, see Attribute-Based Mapping.

  • A static SQL method specific to LINQ to SQL.

  • A function supported by a .NET Framework method.

The topics in this section show how to form and call these methods in your application if you write the code yourself. Developers using Visual Studio would typically use the Object Relational Designer to map user-defined functions.

In This Section

How to: Use Scalar-Valued User-Defined Functions
Describes how to implement a function that returns scalar values.

How to: Use Table-Valued User-Defined Functions
Describes how to implement a function that returns table values.

How to: Call User-Defined Functions Inline
Describes how to make inline calls to functions and the differences in execution when the call is made inline.

How to: Use Scalar-Valued User-Defined Functions

You can map a client method defined on a class to a user-defined function by using the FunctionAttribute attribute. Note that the body of the method constructs an expression that captures the intent of the method call, and passes that expression to the DataContext for translation and execution.

Note

Direct execution occurs only if the function is called outside a query. For more information, see How to: Call User-Defined Functions Inline.

Example

The following SQL code presents a scalar-valued user-defined function ReverseCustName().

CREATE FUNCTION ReverseCustName(@string varchar(100))  
RETURNS varchar(100)  
AS  
BEGIN  
    DECLARE @custName varchar(100)  
    -- Implementation left as exercise for users.  
    RETURN @custName  
END  

You would map a client method such as the following for this code:

C#
[Function(Name = "dbo.ReverseCustName", IsComposable = true)]
[return: Parameter(DbType = "VarChar(100)")]
public string ReverseCustName([Parameter(Name = "string",
    DbType = "VarChar(100)")] string @string)
{
    return ((string)(this.ExecuteMethodCall(this,
        ((MethodInfo)(MethodInfo.GetCurrentMethod())),
        @string).ReturnValue));
}

See also

How to: Use Table-Valued User-Defined Functions

A table-valued function returns a single rowset (unlike stored procedures, which can return multiple result shapes). Because the return type of a table-valued function is Table, you can use a table-valued function anywhere in SQL that you can use a table. You can also treat the table-valued function just as you would a table.

Example

The following SQL function explicitly states that it returns a TABLE. Therefore, the returned rowset structure is implicitly defined.

CREATE FUNCTION ProductsCostingMoreThan(@cost money)  
RETURNS TABLE  
AS  
RETURN  
    SELECT ProductID, UnitPrice  
    FROM Products  
    WHERE UnitPrice > @cost  

LINQ to SQL maps the function as follows:

C#
[Function(Name="dbo.ProductsCostingMoreThan", IsComposable=true)]
public IQueryable<ProductsCostingMoreThanResult> ProductsCostingMoreThan([Parameter(DbType="Money")] System.Nullable<decimal> cost)
{
    return this.CreateMethodCallQuery<ProductsCostingMoreThanResult>(this, ((MethodInfo)(MethodInfo.GetCurrentMethod())), cost);
}

Example

The following SQL code shows that you can join to the table that the function returns and otherwise treat it as you would any other table:

SELECT p2.ProductName, p1.UnitPrice  
FROM dbo.ProductsCostingMoreThan(80.50)  
AS p1 INNER JOIN Products AS p2 ON p1.ProductID = p2.ProductID  

In LINQ to SQL, the query would be rendered as follows:

C#
var q =
    from p in db.ProductsCostingMoreThan(80.50m)
    join s in db.Products on p.ProductID equals s.ProductID
    select new { p.ProductID, s.UnitPrice };

See also

How to: Call User-Defined Functions Inline

Although you can call user-defined functions inline, functions that are included in a query whose execution is deferred are not executed until the query is executed. For more information, see Introduction to LINQ Queries (C#).

When you call the same function outside a query, LINQ to SQL creates a simple query from the method call expression. The following is the SQL syntax (the parameter @p0 is bound to the constant passed in):

SELECT dbo.ReverseCustName(@p0)  

LINQ to SQL creates the following:

C#
string str = db.ReverseCustName("LINQ to SQL");

Example

In the following LINQ to SQL query, you can see an inline call to the generated user-defined function method ReverseCustName. The function is not executed immediately because query execution is deferred. The SQL built for this query translates to a call to the user-defined function in the database (see the SQL code following the query).

C#
var custQuery =
    from cust in db.Customers
    select new {cust.ContactName, Title = 
        db.ReverseCustName(cust.ContactTitle)};
SELECT [t0].[ContactName],  
    dbo.ReverseCustName([t0].[ContactTitle]) AS [Title]  
FROM [Customers] AS [t0]  

See also

Reference

This section provides reference information for LINQ to SQL developers.

You are also encouraged to search Microsoft Docs for specific issues, and especially to participate in the LINQ Forum, where you can discuss more complex topics in detail with experts. In addition, you can study a white paper detailing LINQ to SQL technology, complete with Visual Basic and C# code examples. For more information, see LINQ to SQL: .NET Language-Integrated Query for Relational Data.

In This Section

Data Types and Functions
Describes how common language runtime (CLR) constructs have corresponding expressions in SQL only where LINQ to SQL has explicitly provided a conversion in the translation engine.

Attribute-Based Mapping
Describes the LINQ to SQL attribute-based approach to mapping a LINQ to SQL object model to a SQL Server database.

Code Generation in LINQ to SQL
Describes how LINQ to SQL obtains meta information from a database and then generates code files.

External Mapping
Describes the LINQ to SQL external-mapping approach to mapping a LINQ to SQL object model to a SQL Server database. Provides the XSD schema definition for mapping files.

Frequently Asked Questions
Provides answers to common questions regarding LINQ to SQL.

SQL Server Compact and LINQ to SQL
Describes how SQL Server Compact 3.5 differs from SQL Server in LINQ to SQL applications.

Standard Query Operator Translation
Describes how LINQ to SQL translates Standard Query Operators to SQL commands.

Related Sections

LINQ to SQL
Provides a portal for LINQ to SQL topics.

Language-Integrated Query (LINQ) - C#
Language-Integrated Query (LINQ) - Visual Basic
Provides portals for LINQ topics.

LinqDataSource Web Server Control Overview
Describes how the LinqDataSource control exposes LINQ to Web developers through the ASP.NET data-source control architecture.

Data Types and Functions

The topics listed in the following table describe LINQ to SQL support for members, constructs, and casts of the common language runtime (CLR). Supported members and constructs are available to use in your LINQ to SQL queries.

An unsupported item in the table means that LINQ to SQL cannot translate the CLR member, construct, or cast for execution on the SQL Server. You may still be able to use them in your code, but they must be evaluated before the query is translated to Transact-SQL or after the results have been retrieved from the database.

Topic Description
SQL-CLR Type Mapping Provides a detailed matrix of mappings between CLR types and SQL Server types.
Basic Data Types Summarizes differences in behavior from the .NET Framework.
Boolean Data Types Summarizes differences in behavior from the .NET Framework.
Null Semantics Provides links to LINQ to SQL topics that discuss null and nullable issues.
Numeric and Comparison Operators Summarizes differences in behavior from the .NET Framework.
Sequence Operators Summarizes differences in behavior from the .NET Framework.
System.Convert Methods Summarizes differences in behavior from the .NET Framework.
System.DateTime Methods Describes LINQ to SQL support for members of the System.DateTime structure.
System.DateTimeOffset Methods Describes LINQ to SQL support for members of the System.DateTimeOffset structure.
System.Math Methods Summarizes differences in behavior from the .NET Framework.
System.Object Methods Summarizes differences in behavior from the .NET Framework.
System.String Methods Summarizes differences in behavior from the .NET Framework.
System.TimeSpan Methods Describes LINQ to SQL support for members of the System.TimeSpan structure.
Unsupported Functionality Describes functionality that is not supported in LINQ to SQL.

See also

SQL-CLR Type Mapping

In LINQ to SQL, the data model of a relational database maps to an object model that is expressed in the programming language of your choice. When the application runs, LINQ to SQL translates the language-integrated queries in the object model into SQL and sends them to the database for execution. When the database returns the results, LINQ to SQL translates the results back to objects that you can work with in your own programming language.

In order to translate data between the object model and the database, a type mapping must be defined. LINQ to SQL uses a type mapping to match each common language runtime (CLR) type with a particular SQL Server type. You can define type mappings and other mapping information, such as database structure and table relationships, inside the object model with attribute-based mapping. Alternatively, you can specify the mapping information outside the object model with an external mapping file. For more information, see Attribute-Based Mapping and External Mapping.

This topic discusses the following points:

Default Type Mapping

You can create the object model or external mapping file automatically with the Object Relational Designer (O/R Designer) or the SQLMetal command-line tool. The default type mappings for these tools define which CLR types are chosen to map to columns inside the SQL Server database. For more information about using these tools, see Creating the Object Model.

You can also use the CreateDatabase method to create a SQL Server database based on the mapping information in the object model or external mapping file. The default type mappings for the CreateDatabase method define which type of SQL Server columns are created to map to the CLR types in the object model. For more information, see How to: Dynamically Create a Database.
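
The following is a minimal sketch of creating a database from the mapping information, assuming a generated Northwnd DataContext class; the file path is illustrative.

C#
using (Northwnd db = new Northwnd(@"C:\data\Northwnd.mdf"))
{
    if (!db.DatabaseExists())
    {
        // Columns are created by using the default (or customized) type mappings.
        db.CreateDatabase();
    }
}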

Type Mapping Run-time Behavior Matrix

The following diagram shows the expected run-time behavior of specific type mappings when data is retrieved from or saved to the database. With the exception of serialization, LINQ to SQL does not support mapping between any CLR or SQL Server data types that are not specified in this matrix. For more information on serialization support, see Binary Serialization.

SQL Server to SQL CLR data type mapping table

Note

Some type mappings may result in overflow or data loss exceptions while translating to or from the database.

Custom Type Mapping

With LINQ to SQL, you are not limited to the default type mappings used by the O/R Designer, SQLMetal, and the CreateDatabase method. You can create custom type mappings by explicitly specifying them in a DBML file. Then you can use that DBML file to create the object model code and mapping file. For more information, see SQL-CLR Custom Type Mappings.

Behavior Differences Between CLR and SQL Execution

Because of differences in precision and execution between the CLR and SQL Server, you may receive different results or experience different behavior depending on where you perform your calculations. Calculations performed in LINQ to SQL queries are actually translated to Transact-SQL and then executed on the SQL Server database. Calculations performed outside LINQ to SQL queries are executed within the context of the CLR.

For example, the following are some differences in behavior between the CLR and SQL Server:

  • SQL Server orders some data types differently than data of equivalent type in the CLR. For example, SQL Server data of type UNIQUEIDENTIFIER is ordered differently than CLR data of type System.Guid.

  • SQL Server handles some string comparison operations differently than the CLR. In SQL Server, string comparison behavior depends on the collation settings on the server. For more information, see Working with Collations in the Microsoft SQL Server Books Online.

  • SQL Server may return different values for some mapped functions than the CLR. For example, equality functions will differ because SQL Server considers two strings to be equal if they differ only in trailing white space, whereas the CLR considers them to be not equal.

Enum Mapping

LINQ to SQL supports mapping the CLR System.Enum type to SQL Server types in two ways:

  • Mapping to SQL numeric types (TINYINT, SMALLINT, INT, BIGINT)

    When you map a CLR System.Enum type to a SQL numeric type, you map the underlying integer value of the CLR System.Enum to the value of the SQL Server database column. For example, if a System.Enum named DaysOfWeek contains a member named Tue with an underlying integer value of 3, that member maps to a database value of 3.

  • Mapping to SQL text types (CHAR, NCHAR, VARCHAR, NVARCHAR)

    When you map a CLR System.Enum type to a SQL text type, the SQL database value is mapped to the names of the CLR System.Enum members. For example, if a System.Enum named DaysOfWeek contains a member named Tue with an underlying integer value of 3, that member maps to a database value of Tue.

Note

When mapping SQL text types to a CLR System.Enum, include only the names of the Enum members in the mapped SQL column. Other values are not supported in the Enum-mapped SQL column.

The O/R Designer and SQLMetal command-line tool cannot automatically map a SQL type to a CLR Enum class. You must explicitly configure this mapping by customizing a DBML file for use by the O/R Designer and SQLMetal. For more information about custom type mapping, see SQL-CLR Custom Type Mappings.

Because a SQL column intended for enumeration is of the same type as other numeric and text columns, these tools will not recognize your intent and will default to mapping as described in the following Numeric Mapping and Text and XML Mapping sections. For more information about generating code with the DBML file, see Code Generation in LINQ to SQL.

The DataContext.CreateDatabase method creates a SQL column of numeric type to map a CLR System.Enum type.
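
The following is a hedged sketch of how both mappings can be expressed with attribute-based mapping (System.Data.Linq.Mapping); the DaysOfWeek enum, the Schedules table, and its columns are illustrative.

C#
public enum DaysOfWeek { Mon = 1, Tue = 3, Wed = 4 }

[Table(Name = "Schedules")]
public class Schedule
{
    [Column(IsPrimaryKey = true)]
    public int ScheduleID;

    // Numeric mapping: DaysOfWeek.Tue is stored as the integer 3.
    [Column(DbType = "INT")]
    public DaysOfWeek NumericDay;

    // Text mapping: DaysOfWeek.Tue is stored as the string "Tue".
    [Column(DbType = "NVarChar(10)")]
    public DaysOfWeek TextDay;
}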

Numeric Mapping

LINQ to SQL lets you map many CLR and SQL Server numeric types. The following table shows the CLR types that O/R Designer and SQLMetal select when building an object model or external mapping file based on your database.

SQL Server Type Default CLR Type mapping used by O/R Designer and SQLMetal
BIT System.Boolean
TINYINT System.Byte
SMALLINT System.Int16
INT System.Int32
BIGINT System.Int64
SMALLMONEY System.Decimal
MONEY System.Decimal
DECIMAL System.Decimal
NUMERIC System.Decimal
REAL/FLOAT(24) System.Single
FLOAT/FLOAT(53) System.Double

The next table shows the default type mappings used by the DataContext.CreateDatabase method to define which type of SQL columns are created to map to the CLR types defined in your object model or external mapping file.

CLR Type Default SQL Server Type used by DataContext.CreateDatabase
System.Boolean BIT
System.Byte TINYINT
System.Int16 SMALLINT
System.Int32 INT
System.Int64 BIGINT
System.SByte SMALLINT
System.UInt16 INT
System.UInt32 BIGINT
System.UInt64 DECIMAL(20)
System.Decimal DECIMAL(29,4)
System.Single REAL
System.Double FLOAT

There are many other numeric mappings you can choose, but some may result in overflow or data loss exceptions while translating to or from the database. For more information, see the Type Mapping Run Time Behavior Matrix.

Decimal and Money Types

The default precision of the SQL Server DECIMAL type (precision 18, scale 0) is much smaller than the precision of the CLR System.Decimal type that it is paired with by default. This can result in precision loss when you save data to the database. However, just the opposite can happen if the SQL Server DECIMAL type is configured with greater than 29 digits of precision. When a SQL Server DECIMAL type has been configured with a greater precision than the CLR System.Decimal, precision loss can occur when retrieving data from the database.

The SQL Server MONEY and SMALLMONEY types, which are also paired with the CLR System.Decimal type by default, have a much smaller precision, which can result in overflow or data loss exceptions when saving data to the database.
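
If the defaults do not fit your data, one option (shown here as a hedged sketch on an illustrative entity member) is to customize the column mapping so that the database column covers the range and scale you need.

C#
[Column(DbType = "DECIMAL(29,4)")]
public decimal UnitPrice;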

Text and XML Mapping

There are also many text-based and XML types that you can map with LINQ to SQL. The following table shows the CLR types that O/R Designer and SQLMetal select when building an object model or external mapping file based on your database.

SQL Server Type Default CLR Type mapping used by O/R Designer and SQLMetal
CHAR System.String
NCHAR System.String
VARCHAR System.String
NVARCHAR System.String
TEXT System.String
NTEXT System.String
XML System.Xml.Linq.XElement

The next table shows the default type mappings used by the DataContext.CreateDatabase method to define which type of SQL columns are created to map to the CLR types defined in your object model or external mapping file.

CLR Type Default SQL Server Type used by DataContext.CreateDatabase
System.Char NCHAR(1)
System.String NVARCHAR(4000)
System.Char[] NVARCHAR(4000)
Custom type implementing Parse() and ToString() NVARCHAR(MAX)

There are many other text-based and XML mappings you can choose, but some may result in overflow or data loss exceptions while translating to or from the database. For more information, see the Type Mapping Run Time Behavior Matrix.

XML Types

The SQL Server XML data type is available starting in Microsoft SQL Server 2005. You can map the SQL Server XML data type to XElement, XDocument, or String. If the column stores XML fragments that cannot be read into XElement, the column must be mapped to String to avoid run-time errors. XML fragments that must be mapped to String include the following:

  • A sequence of XML elements

  • Attributes

  • Processing instructions (PI)

  • Comments

Although you can map XElement and XDocument to SQL Server as shown in the Type Mapping Run Time Behavior Matrix, the DataContext.CreateDatabase method has no default SQL Server type mapping for these types.

Custom Types

If a class implements Parse() and ToString(), you can map the object to any SQL text type (CHAR, NCHAR, VARCHAR, NVARCHAR, TEXT, NTEXT, XML). The object is stored in the database by sending the value returned by ToString() to the mapped database column. The object is reconstructed by invoking Parse() on the string returned by the database.
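
The following is a hedged sketch of such a mapping; the PhoneNumber type, the Contacts table, and its columns are illustrative. LINQ to SQL calls ToString() when saving the value and Parse() when materializing it from the database column.

C#
public class PhoneNumber
{
    public string Value;

    public override string ToString()
    {
        return Value;
    }

    public static PhoneNumber Parse(string text)
    {
        return new PhoneNumber { Value = text };
    }
}

[Table(Name = "Contacts")]
public class Contact
{
    [Column(IsPrimaryKey = true)]
    public int ContactID;

    // Stored in a SQL text column by using ToString()/Parse().
    [Column(DbType = "NVarChar(30)")]
    public PhoneNumber Phone;
}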

Note

LINQ to SQL does not support serialization by using System.Xml.Serialization.IXmlSerializable.

Date and Time Mapping

With LINQ to SQL, you can map many SQL Server date and time types. The following table shows the CLR types that O/R Designer and SQLMetal select when building an object model or external mapping file based on your database.

SQL Server Type Default CLR Type mapping used by O/R Designer and SQLMetal
SMALLDATETIME System.DateTime
DATETIME System.DateTime
DATETIME2 System.DateTime
DATETIMEOFFSET System.DateTimeOffset
DATE System.DateTime
TIME System.TimeSpan

The next table shows the default type mappings used by the DataContext.CreateDatabase method to define which type of SQL columns are created to map to the CLR types defined in your object model or external mapping file.

CLR Type Default SQL Server Type used by DataContext.CreateDatabase
System.DateTime DATETIME
System.DateTimeOffset DATETIMEOFFSET
System.TimeSpan TIME

There are many other date and time mappings you can choose, but some may result in overflow or data loss exceptions while translating to or from the database. For more information, see the Type Mapping Run Time Behavior Matrix.

Note

The SQL Server types DATETIME2, DATETIMEOFFSET, DATE, and TIME are available starting with Microsoft SQL Server 2008. LINQ to SQL supports mapping to these new types starting with the .NET Framework version 3.5 SP1.

System.Datetime

The range and precision of the CLR System.DateTime type is greater than the range and precision of the SQL Server DATETIME type, which is the default type mapping for the DataContext.CreateDatabase method. To help avoid exceptions related to dates outside the range of DATETIME, use DATETIME2, which is available starting with Microsoft SQL Server 2008. DATETIME2 can match the range and precision of the CLR System.DateTime.

SQL Server dates have no concept of time zone, a feature that is richly supported in the CLR. DateTime values are saved to the database as is, without time-zone conversion, regardless of the original DateTimeKind information. When DateTime values are retrieved from the database, they are loaded as is into a DateTime with a DateTimeKind of Unspecified. For more information about supported System.DateTime methods, see System.DateTime Methods.
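
Where the DATETIME range or precision is not sufficient, a hedged option is to customize the column mapping (shown here on an illustrative entity member) so that DATETIME2 is used instead.

C#
[Column(DbType = "DATETIME2")]
public DateTime EventDate;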

System.TimeSpan

Microsoft SQL Server 2008 and the .NET Framework 3.5 SP1 let you map the CLR System.TimeSpan type to the SQL Server TIME type. However, there is a large difference between the range that the CLR System.TimeSpan supports and what the SQL Server TIME type supports. Mapping values less than 0 or greater than 23:59:59.9999999 hours to the SQL TIME will result in overflow exceptions. For more information, see System.TimeSpan Methods.

In Microsoft SQL Server 2000 and SQL Server 2005, you cannot map database fields to TimeSpan. However, operations on TimeSpan are supported because TimeSpan values can be returned from DateTime subtraction or introduced into an expression as a literal or bound variable.

Binary Mapping

There are many SQL Server types that can map to the CLR type System.Data.Linq.Binary. The following table shows the SQL Server types that cause O/R Designer and SQLMetal to define a CLR System.Data.Linq.Binary type when building an object model or external mapping file based on your database.

SQL Server Type Default CLR Type mapping used by O/R Designer and SQLMetal
BINARY(50) System.Data.Linq.Binary
VARBINARY(50) System.Data.Linq.Binary
VARBINARY(MAX) System.Data.Linq.Binary
VARBINARY(MAX) with the FILESTREAM attribute System.Data.Linq.Binary
IMAGE System.Data.Linq.Binary
TIMESTAMP System.Data.Linq.Binary

The next table shows the default type mappings used by the DataContext.CreateDatabase method to define which type of SQL columns are created to map to the CLR types defined in your object model or external mapping file.

CLR Type Default SQL Server Type used by DataContext.CreateDatabase
System.Data.Linq.Binary VARBINARY(MAX)
System.Byte[] VARBINARY(MAX)
System.Runtime.Serialization.ISerializable VARBINARY(MAX)

There are many other binary mappings you can choose, but some may result in overflow or data loss exceptions while translating to or from the database. For more information, see the Type Mapping Run Time Behavior Matrix.

SQL Server FILESTREAM

The FILESTREAM attribute for VARBINARY(MAX) columns is available starting with Microsoft SQL Server 2008; you can map to it with LINQ to SQL starting with the .NET Framework version 3.5 SP1.

Although you can map VARBINARY(MAX) columns with the FILESTREAM attribute to Binary objects, the DataContext.CreateDatabase method is unable to automatically create columns with the FILESTREAM attribute. For more information about FILESTREAM, see FILESTREAM Overview on Microsoft SQL Server Books Online.

Binary Serialization

If a class implements the ISerializable interface, you can serialize an object to any SQL binary field (BINARY, VARBINARY, IMAGE). The object is serialized and deserialized according to how the ISerializable interface is implemented. For more information, see Binary Serialization.

Miscellaneous Mapping

The following table shows the default type mappings for some miscellaneous types that have not yet been covered. It lists the CLR types that O/R Designer and SQLMetal select when building an object model or external mapping file based on your database.

SQL Server Type Default CLR Type mapping used by O/R Designer and SQLMetal
UNIQUEIDENTIFIER System.Guid
SQL_VARIANT System.Object

The next table shows the default type mappings used by the DataContext.CreateDatabase method to define which type of SQL columns are created to map to the CLR types defined in your object model or external mapping file.

CLR Type Default SQL Server Type used by DataContext.CreateDatabase
System.Guid UNIQUEIDENTIFIER
System.Object SQL_VARIANT

LINQ to SQL does not support any other type mappings for these miscellaneous types. For more information, see the Type Mapping Run Time Behavior Matrix.

See also

Basic Data Types

Because LINQ to SQL queries translate to Transact-SQL before they are executed on Microsoft SQL Server, LINQ to SQL supports much of the same built-in functionality that SQL Server provides for basic data types.

Casting

Implicit or explicit casts are enabled from a source CLR type to a target CLR type if there is a similar valid conversion within SQL Server. For more information about CLR casting, see CType Function (Visual Basic) and Type-testing and conversion operators. After conversion, casts change the behavior of operations performed on a CLR expression to match the behavior of other CLR expressions that naturally map to the destination type. Casts are also translatable in the context of inheritance mapping. Objects can be cast to more specific entity subtypes so that their subtype-specific data can be accessed.

Equality Operators

LINQ to SQL supports the following equality operators on basic data types inside LINQ to SQL queries:

  • Equality and inequality operators: Equality and inequality operators are supported for numeric, Boolean, DateTime, and TimeSpan types. For more information about the Visual Basic operators = and <>, see Comparison Operators. For more information about the C# comparison operators == and !=, see Equality operators.

  • Is operator: The IS operator has a supported translation when inheritance mapping is being used. It can be used instead of directly testing the discriminator column to determine whether an object is of a specific entity type, and is translated to a check on the discriminator column. For more information about the Visual Basic and C# Is operators, see Is Operator and is.

See also

Boolean Data Types

Boolean operators work as expected in the common language runtime (CLR), except that short-circuiting behavior is not translated. For example, the Visual Basic AndAlso operator behaves like the And operator. The C# && operator behaves like the & operator.

LINQ to SQL supports the following operators.

Visual Basic C#
And Operator & Operator
AndAlso Operator && Operator
Or Operator | Operator
OrElse Operator || Operator
Xor Operator ^ Operator
Not Operator ! Operator

See also

Null Semantics

The following table provides links to various parts of the LINQ to SQL documentation where null (Nothing in Visual Basic) issues are discussed.

Topic Description
SQL-CLR Type Mismatches The "Null Semantics" section of this topic includes discussion of the three-state SQL Boolean versus the two-state common language runtime (CLR) Boolean, the literal Nothing (Visual Basic) and null (C#), and other similar issues.
Standard Query Operator Translation The "Null Semantics" section of this topic describes null comparison semantics in LINQ to SQL.
System.String Methods The "Differences from .NET" section of this topic describes how a return of 0 from LastIndexOf might mean either that the string is null or that the found position is 0.
Compute the Sum of Values in a Numeric Sequence Describes how the Sum operator evaluates to null (Nothing in Visual Basic) instead of 0 for a sequence that contains only nulls or for an empty sequence.

See also

Numeric and Comparison Operators

Arithmetic and comparison operators work as expected in the common language runtime (CLR) except as follows:

  • SQL does not support the modulus operator on floating-point numbers.

  • SQL does not support unchecked arithmetic.

  • Increment and decrement operators cause side-effects when you use them in expressions that cannot be replicated in SQL and are, therefore, not supported.

Supported Operators

LINQ to SQL supports the following operators.

  • Basic arithmetic operators:

    • +

    • - (subtraction)

    • *

    • /

    • Visual Basic integer division (\)

    • % (Visual Basic Mod)

    • <<

    • >>

    • - (unary negation)

  • Basic comparison operators:

    • Visual Basic = and C# ==

    • Visual Basic <> and C# !=

    • Visual Basic Is/IsNot

    • <

    • <=

    • >

    • >=

See also

Sequence Operators

Generally speaking, LINQ to SQL does not support sequence operators that have one or more of the following qualities:

  • Take a lambda with an index parameter.

  • Rely on the properties of sequential rows, such as TakeWhile.

  • Rely on an arbitrary CLR implementation, such as IComparer<T>.

Examples of Unsupported Operators
Enumerable.Where<TSource>(IEnumerable<TSource>, Func<TSource,Int32,Boolean>)
Enumerable.Select<TSource,TResult>(IEnumerable<TSource>, Func<TSource,Int32,TResult>)
Enumerable.TakeWhile<TSource>(IEnumerable<TSource>, Func<TSource,Boolean>)
Enumerable.TakeWhile<TSource>(IEnumerable<TSource>, Func<TSource,Int32,Boolean>)
Enumerable.SkipWhile<TSource>(IEnumerable<TSource>, Func<TSource,Boolean>)
Enumerable.SkipWhile<TSource>(IEnumerable<TSource>, Func<TSource,Int32,Boolean>)
Enumerable.GroupBy<TSource,TKey,TElement>(IEnumerable<TSource>, Func<TSource,TKey>, Func<TSource,TElement>, IEqualityComparer<TKey>)
Enumerable.GroupBy<TSource,TKey,TElement,TResult>(IEnumerable<TSource>, Func<TSource,TKey>, Func<TSource,TElement>, Func<TKey,IEnumerable<TElement>,TResult>, IEqualityComparer<TKey>)
Enumerable.Reverse<TSource>(IEnumerable<TSource>)
Enumerable.DefaultIfEmpty<TSource>(IEnumerable<TSource>, TSource)
Enumerable.ElementAt<TSource>(IEnumerable<TSource>, Int32)
Enumerable.ElementAtOrDefault<TSource>(IEnumerable<TSource>, Int32)
Enumerable.Range(Int32, Int32)
Enumerable.Repeat<TResult>(TResult, Int32)
Enumerable.Empty<TResult>()
Enumerable.Contains<TSource>(IEnumerable<TSource>, TSource)
Enumerable.Aggregate<TSource>(IEnumerable<TSource>, Func<TSource,TSource,TSource>)
Enumerable.Aggregate<TSource,TAccumulate>(IEnumerable<TSource>, TAccumulate, Func<TAccumulate,TSource,TAccumulate>)
Enumerable.Aggregate<TSource,TAccumulate,TResult>(IEnumerable<TSource>, TAccumulate, Func<TAccumulate,TSource,TAccumulate>, Func<TAccumulate,TResult>)
Enumerable.SequenceEqual

Differences from .NET

All supported sequence operators work as expected in the common language runtime (CLR) except for Average. Average returns a value of the same type as the type being averaged, whereas in the CLR Average always returns either a Double or a Decimal. If the source argument is explicitly cast to double / decimal or the selector casts to double / decimal, the resulting SQL will also have such a conversion and the result will be as expected.
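
The following is a hedged sketch, assuming a Northwind-style Order_Details table with an integral Quantity column: casting the selector to double makes both the generated SQL and the CLR result a floating-point average rather than an integer one.

C#
double averageQuantity = db.Order_Details.Average(od => (double)od.Quantity);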

See also

System.Convert Methods

LINQ to SQL does not support the following Convert methods.

  • Versions with an IFormatProvider parameter.

  • Methods that involve char arrays or byte arrays:

  • The following methods:

    • public static <Type2> To<Type2>(<Type1> value); where

      Type1 and Type2 are each one of sbyte, uint, ulong, or ushort.

    • C#:

      int To<int type>(string value, int fromBase),

      ToString(... value, int toBase)

    • Visual Basic:

      Function To(Of [Numeric])(value as String, fromBase As Integer)

      As [Numeric], ToString( value As …, toBase As Integer)

    • IsDBNull

    • GetTypeCode

    • ChangeType

See also

System.DateTime Methods

The following LINQ to SQL-supported methods, operators, and properties are available to use in LINQ to SQL queries. When a method, operator, or property is unsupported, LINQ to SQL cannot translate the member for execution on SQL Server. You may use these members in your code; however, they must be evaluated before the query is translated to Transact-SQL or after the results have been retrieved from the database.

Supported System.DateTime Members

Once mapped in the object model or external mapping file, LINQ to SQL allows you to call the following System.DateTime members inside LINQ to SQL queries.

Supported DateTime methods: Add, AddDays, AddHours, AddMilliseconds, AddMinutes, AddMonths, AddSeconds, AddTicks, AddYears, Compare, CompareTo(DateTime), Equals(DateTime)

Supported DateTime operators: Addition, Equality, GreaterThan, GreaterThanOrEqual, Inequality, LessThan, LessThanOrEqual, Subtraction

Supported DateTime properties: Date, Day, DayOfWeek, DayOfYear, Hour, Millisecond, Minute, Month, Now, Second, TimeOfDay, Today, Year

Members Not Supported by LINQ to SQL

The following members are not supported inside LINQ to SQL queries.

IsDaylightSavingTime IsLeapYear
DaysInMonth ToBinary
ToFileTime ToFileTimeUtc
ToLongDateString ToLongTimeString
ToOADate ToShortDateString
ToShortTimeString ToUniversalTime
FromBinary UtcNow
FromFileTime FromFileTimeUtc
FromOADate GetDateTimeFormats

Method Translation Example

All methods supported by LINQ to SQL are translated to Transact-SQL before they are sent to SQL Server. For example, consider the following pattern.

(dateTime1 - dateTime2).{Days, Hours, Milliseconds, Minutes, Months, Seconds, Years}

When it is recognized, it is translated into a direct call to the SQL Server DATEDIFF function, as follows:

DATEDIFF({DatePart}, @dateTime1, @dateTime2)

SQLMethods Date and Time Methods

In addition to the methods offered by the DateTime structure, LINQ to SQL offers the methods listed in the following table from the System.Data.Linq.SqlClient.SqlMethods class for working with date and time.

DateDiffDay DateDiffMillisecond DateDiffNanosecond
DateDiffHour DateDiffMinute DateDiffSecond
DateDiffMicrosecond DateDiffMonth DateDiffYear
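
For example, a query might compute day differences directly on the server by calling one of these methods (a sketch; the Orders columns are assumptions, and a using directive for System.Data.Linq.SqlClient is required):

C#
// Translated to DATEDIFF(DAY, ...) on the server; column names are illustrative.
var shippingDelays =
    from o in db.Orders
    where o.ShippedDate != null
    select new { o.OrderID, DaysToShip = SqlMethods.DateDiffDay(o.OrderDate, o.ShippedDate) };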

See also

System.Math Methods

LINQ to SQL does not support the following Math methods.

Differences from .NET

The .NET Framework has different rounding semantics from SQL Server. The Round method in the .NET Framework performs banker's rounding, in which numbers that end in .5 round to the nearest even digit instead of always rounding to the next higher digit. For example, 2.5 rounds to 2, while 3.5 rounds to 4. (This technique helps avoid a systematic bias toward higher values when rounding is applied across many values.)

In SQL, the ROUND function instead always rounds away from zero: 2.5 rounds to 3, in contrast to 2 in the .NET Framework.

LINQ to SQL passes through to the SQL ROUND semantics and does not try to implement Banker's rounding.
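
A minimal illustration of the difference (the UnitPrice column is an assumption for this sketch):

C#
// In the CLR, Math.Round uses banker's rounding:
double local = Math.Round(2.5);   // 2

// Inside a LINQ to SQL query the same call is translated to SQL ROUND,
// which rounds away from zero, so a stored value of 2.5 becomes 3.
var rounded = db.OrderDetails.Select(d => Math.Round((double)d.UnitPrice));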

See also

System.Object Methods

LINQ to SQL supports the following Object methods.

Object.Equals(Object) Object.Equals(Object, Object)
Object.ToString()

LINQ to SQL does not support the following Object methods.

Object.GetHashCode() Object.ReferenceEquals(Object, Object)
Object.MemberwiseClone() Object.GetType()
Object.ToString() for binary types such as BINARY, VARBINARY, IMAGE, and TIMESTAMP.

Differences from .NET

For double values, Object.ToString() is translated to CONVERT(NVARCHAR(30), @x, 2) in SQL. SQL Server always uses 16 digits and scientific notation in this case (for example, "0.000000000000000e+000" for 0). As a result, Object.ToString() conversion does not produce the same string as Convert.ToString in the .NET Framework.

See also

System.String Methods

LINQ to SQL does not support the following String methods.

Unsupported System.String Methods in General

Unsupported String methods in general:

  • Culture-aware overloads (methods that take a CultureInfo / StringComparison / IFormatProvider).

  • Methods that take or produce a char array.

Unsupported System.String Static Methods

Unsupported System.String Static Methods
String.Copy(String)
String.Compare(String, String, Boolean)
String.Compare(String, String, Boolean, CultureInfo)
String.Compare(String, Int32, String, Int32, Int32)
String.Compare(String, Int32, String, Int32, Int32, Boolean)
String.Compare(String, Int32, String, Int32, Int32, Boolean, CultureInfo)
String.CompareOrdinal(String, String)
String.CompareOrdinal(String, Int32, String, Int32, Int32)
String.Format
String.Join

Unsupported System.String Non-static Methods

Unsupported System.String Non-static Methods
String.IndexOfAny(Char[])
String.Split
String.ToCharArray()
String.ToUpper(CultureInfo)
String.TrimEnd(Char[])
String.TrimStart(Char[])

Differences from .NET

  • Queries do not account for SQL Server collations that might be in effect on the server, and therefore will provide culture-sensitive, case-insensitive comparisons by default. This behavior differs from the default, case-sensitive semantics of the .NET Framework.

  • When LastIndexOf returns 0, either the string is NULL or the found position is 0.

  • Unexpected results might be returned from concatenation or other operations on fixed-length strings (CHAR, NCHAR), because these types automatically have padding applied in the database.

  • Because many methods, such as Replace, ToLower, ToUpper, and the character indexer, have no valid translation for TEXT or NTEXT columns and XML, SqlExceptions occur if translated normally. This behavior is considered acceptable for these types. However, all string operations must match common language runtime (CLR) semantics for VARCHAR, NVARCHAR, VARCHAR(max), and NVARCHAR(max).

See also

System.TimeSpan Methods

Member support for System.TimeSpan greatly depends on the versions of the .NET Framework and Microsoft SQL Server that you are using.

When a method, operator, or property is unsupported, LINQ to SQL cannot translate the member for execution on SQL Server. You may still be able to use these members in your code; however, they must be evaluated before the query is translated to Transact-SQL or after the results have been retrieved from the database.

Previous Limitations

When using LINQ to SQL with versions of the .NET Framework prior to .NET Framework 3.5 SP1, you cannot map SQL Server database fields to System.TimeSpan. However, operations on TimeSpan are supported because TimeSpan values can be returned from DateTime subtraction or introduced into an expression as a literal or bound variable.

Supported System.TimeSpan Members

The following LINQ to SQL-supported methods, operators, and properties are available for you to use in your LINQ to SQL queries. Once mapped in the object model or external mapping file, LINQ to SQL allows you to call many of the System.TimeSpan members inside your LINQ to SQL queries.

Supported TimeSpan Methods: Compare, CompareTo(TimeSpan), Duration, Equals(TimeSpan, TimeSpan), Equals(TimeSpan)
Supported TimeSpan Operators: Equality, GreaterThan, GreaterThanOrEqual, Inequality, LessThan, LessThanOrEqual
Supported TimeSpan Properties: Days, Hours, MaxValue, Milliseconds, Minutes, MinValue

Note

The ability to map System.TimeSpan to a SQL TIME column with LINQ to SQL requires the .NET Framework 3.5 SP1 or later. The SQL TIME data type is available only in Microsoft SQL Server 2008 and later.

Addition and Subtraction

Although the CLR System.TimeSpan type does support addition and subtraction, the SQL TIME type does not. Because of this, your LINQ to SQL queries will generate errors if they attempt addition and subtraction when they are mapped to the SQL TIME type. You can find other considerations for working with SQL date and time types in SQL-CLR Type Mapping.
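
For example, a TimeSpan produced by DateTime subtraction can still be used in a query through the translated DATEDIFF pattern shown earlier (a sketch; the Orders columns are assumptions and are treated as non-nullable DateTime here):

C#
// The (dateTime1 - dateTime2).Days pattern is translated to DATEDIFF on the server.
var slowOrders =
    from o in db.Orders
    where (o.ShippedDate - o.OrderDate).Days > 7
    select o;

// By contrast, adding or subtracting values mapped to the SQL TIME type
// is not translated and causes an error when the query executes.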

See also

System.DateTimeOffset Methods

Once mapped in the object model or external mapping file, LINQ to SQL allows you to call most of the System.DateTimeOffset methods, operators, and properties from within your LINQ to SQL queries.

The only methods not supported are those inherited from System.Object that do not make sense in the context of LINQ to SQL queries, such as: Finalize, GetHashCode, GetType, and MemberwiseClone. These methods are not supported because LINQ to SQL cannot translate them for execution on the SQL Server.

Note

The common language runtime (CLR) System.DateTimeOffset structure, and the ability to map it to a SQL DATETIMEOFFSET column with LINQ to SQL, requires the .NET Framework 3.5 SP1 or later. The SQL DATETIMEOFFSET column is available only in Microsoft SQL Server 2008 and later.

SQLMethods Date and Time Methods

In addition to the methods offered by the DateTimeOffset structure, LINQ to SQL offers the methods listed in the following table from the System.Data.Linq.SqlClient.SqlMethods class for working with date and time.

DateDiffDay DateDiffMillisecond DateDiffNanosecond
DateDiffHour DateDiffMinute DateDiffSecond
DateDiffMicrosecond DateDiffMonth DateDiffYear

See also

Attribute-Based Mapping

LINQ to SQL maps a SQL Server database to a LINQ to SQL object model by either applying attributes or by using an external mapping file. This topic outlines the attribute-based approach.

In its most elementary form, LINQ to SQL maps a database to a DataContext, a table to a class, and columns and relationships to properties on those classes. You can also use attributes to map an inheritance hierarchy in your object model. For more information, see How to: Generate the Object Model in Visual Basic or C#.

Developers using Visual Studio typically perform attribute-based mapping by using the Object Relational Designer. You can also use the SQLMetal command-line tool, or you can hand-code the attributes yourself. For more information, see How to: Generate the Object Model in Visual Basic or C#.

Note

You can also map by using an external XML file. For more information, see External Mapping.

The following sections describe attribute-based mapping in more detail. For more information, see the System.Data.Linq.Mapping namespace.
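
For example, a minimal hand-coded mapping might look like the following sketch (the table and columns mirror the Northwind Customers sample shown later in this topic, but the class shapes and context type are illustrative):

C#
// Requires references to System.Data.Linq and the System.Data.Linq.Mapping namespace.
[Table(Name = "Customers")]
public class Customer
{
    [Column(IsPrimaryKey = true, CanBeNull = false)]
    public string CustomerID { get; set; }

    [Column(CanBeNull = false)]
    public string CompanyName { get; set; }

    [Column]
    public string City { get; set; }
}

// A strongly typed DataContext; the Customers field is populated by the base class.
public class NorthwindContext : DataContext
{
    public Table<Customer> Customers;

    public NorthwindContext(string connection) : base(connection) { }
}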

DatabaseAttribute Attribute

Use this attribute to specify the default name of the database when a name is not supplied by the connection. This attribute is optional, but if you use it, you must supply the Name property, as described in the following table.

Property Type Default Description
Name String See Name Used with its Name property, specifies the name of the database.

For more information, see DatabaseAttribute.

TableAttribute Attribute

Use this attribute to designate a class as an entity class that is associated with a database table or view. LINQ to SQL treats classes that have this attribute as persistent classes. The following table describes the Name property.

Property Type Default Description
Name String Same string as class name Specifies the name of the database table or view that the entity class is associated with.

For more information, see TableAttribute.

ColumnAttribute Attribute

Use this attribute to designate a member of an entity class to represent a column in a database table. You can apply this attribute to any field or property.

Only those members you identify as columns are retrieved and persisted when LINQ to SQL saves changes to the database. Members without this attribute are assumed to be non-persistent and are not submitted for inserts or updates.

The following table describes properties of this attribute.

Property Type Default Description
AutoSync AutoSync Never Instructs the common language runtime (CLR) to retrieve the value after an insert or update operation. Options: Always, Never, OnUpdate, OnInsert.
CanBeNull Boolean true Indicates that a column can contain null values.
DbType String Inferred database column type Uses database types and modifiers to specify the type of the database column.
Expression String Empty Defines a computed column in a database.
IsDbGenerated Boolean false Indicates that a column contains values that the database auto-generates.
IsDiscriminator Boolean false Indicates that the column contains a discriminator value for a LINQ to SQL inheritance hierarchy.
IsPrimaryKey Boolean false Specifies that this class member represents a column that is, or is part of, the primary key of the table.
IsVersion Boolean false Identifies the column type of the member as a database timestamp or version number.
UpdateCheck UpdateCheck Always, unless IsVersion is true for a member Specifies how LINQ to SQL approaches the detection of optimistic concurrency conflicts.

For more information, see ColumnAttribute.

Note

AssociationAttribute and ColumnAttribute Storage property values are case sensitive. For example, ensure that values used in the attribute for the AssociationAttribute.Storage property match the case for the corresponding property names used elsewhere in the code. This applies to all .NET programming languages, even those which are not typically case sensitive, including Visual Basic. For more information about the Storage property, see DataAttribute.Storage.

AssociationAttribute Attribute

Use this attribute to designate a property to represent an association in the database, such as a foreign key to primary key relationship. For more information about relationships, see How to: Map Database Relationships.

The following table describes properties of this attribute.

Property Type Default Description
DeleteOnNull Boolean false When placed on an association whose foreign key members are all non-nullable, deletes the object when the association is set to null.
DeleteRule String None Adds delete behavior to an association.
IsForeignKey Boolean false If true, designates the member as the foreign key in an association representing a database relationship.
IsUnique Boolean false If true, indicates a uniqueness constraint on the foreign key.
OtherKey String ID of the related class Designates one or more members of the target entity class as key values on the other side of the association.
ThisKey String ID of the containing class Designates members of this entity class to represent the key values on this side of the association.

For more information, see AssociationAttribute.
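
For example, the many-to-one side of a Customer/Orders relationship might be mapped as follows (a sketch using illustrative names; note that the Storage value must match the field name exactly, as described in the note below):

C#
// Requires System.Data.Linq (for EntityRef) and System.Data.Linq.Mapping.
[Table(Name = "Orders")]
public class Order
{
    [Column(IsPrimaryKey = true)]
    public int OrderID { get; set; }

    [Column]
    public string CustomerID { get; set; }

    private EntityRef<Customer> _customer;

    // ThisKey names the foreign-key member on this class; OtherKey names the
    // key member on the related Customer class.
    [Association(Storage = "_customer", ThisKey = "CustomerID",
        OtherKey = "CustomerID", IsForeignKey = true)]
    public Customer Customer
    {
        get { return _customer.Entity; }
        set { _customer.Entity = value; }
    }
}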

Note

AssociationAttribute and ColumnAttribute Storage property values are case sensitive. For example, ensure that values used in the attribute for the AssociationAttribute.Storage property match the case for the corresponding property names used elsewhere in the code. This applies to all .NET programming languages, even those which are not typically case sensitive, including Visual Basic. For more information about the Storage property, see DataAttribute.Storage.

InheritanceMappingAttribute Attribute

Use this attribute to map an inheritance hierarchy.

The following table describes properties of this attribute.

Property Type Default Description
Code String None. Value must be supplied. Specifies the code value of the discriminator.
IsDefault Boolean false If true, instantiates an object of this type when no discriminator value in the store matches any one of the specified values.
Type Type None. Value must be supplied. Specifies the type of the class in the hierarchy.

For more information, see InheritanceMappingAttribute.
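
For example, a two-level hierarchy might be mapped as follows (a sketch; the Contacts table, the ContactType discriminator column, and the discriminator codes are assumptions):

C#
[Table(Name = "Contacts")]
[InheritanceMapping(Code = "Unknown", Type = typeof(Contact), IsDefault = true)]
[InheritanceMapping(Code = "Employee", Type = typeof(EmployeeContact))]
public class Contact
{
    [Column(IsPrimaryKey = true)]
    public int ContactID { get; set; }

    // The discriminator column selects which class to instantiate for each row.
    [Column(IsDiscriminator = true)]
    public string ContactType { get; set; }
}

public class EmployeeContact : Contact
{
    [Column]
    public string EmployeeNumber { get; set; }
}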

FunctionAttribute Attribute

Use this attribute to designate a method as representing a stored procedure or user-defined function in the database.

The following table describes the properties of this attribute.

Property Type Default Description
IsComposable Boolean false If false, indicates mapping to a stored procedure. If true, indicates mapping to a user-defined function.
Name String Same string as name in the database Specifies the name of the stored procedure or user-defined function.

For more information, see FunctionAttribute.

ParameterAttribute Attribute

Use this attribute to map input parameters on stored procedure methods.

The following table describes properties of this attribute.

Property Type Default Description
DbType String None Specifies database type.
Name String Same string as parameter name in database Specifies a name for the parameter.

For more information, see ParameterAttribute.
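
For example, a stored procedure might be surfaced as a method on the context as follows (a sketch modeled on designer-generated code; the procedure name and parameter are assumptions, and Customer is the illustrative entity from earlier in this topic):

C#
// Requires System.Data.Linq, System.Data.Linq.Mapping, and System.Reflection.
public class NorthwindFunctions : DataContext
{
    public NorthwindFunctions(string connection) : base(connection) { }

    [Function(Name = "dbo.CustomersByCity")]
    public ISingleResult<Customer> CustomersByCity(
        [Parameter(Name = "City", DbType = "NVarChar(15)")] string city)
    {
        // ExecuteMethodCall sends the call to the mapped stored procedure.
        IExecuteResult result = this.ExecuteMethodCall(
            this, (MethodInfo)MethodBase.GetCurrentMethod(), city);
        return (ISingleResult<Customer>)result.ReturnValue;
    }
}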

ResultTypeAttribute Attribute

Use this attribute to specify a result type.

The following table describes properties of this attribute.

Property Type Default Description
Type Type (None) Used on methods mapped to stored procedures that return IMultipleResults. Declares the valid or expected type mappings for the stored procedure.

For more information, see ResultTypeAttribute.

DataAttribute Attribute

Use this attribute to specify names and private storage fields.

The following table describes properties of this attribute.

Property Type Default Description
Name String Same as name in database Specifies the name of the table, column, and so on.
Storage String Public accessors Specifies the name of the underlying storage field.

For more information, see DataAttribute.

See also

Code Generation in LINQ to SQL

You can generate code to represent a database by using either the Object Relational Designer or the SQLMetal command-line tool. In either case, end-to-end code generation occurs in three stages:

  1. The DBML Extractor extracts schema information from the database and reassembles the information into an XML-formatted DBML file.

  2. The DBML file is scanned by the DBML Validator for errors.

  3. If no validation errors appear, the file is passed to the Code Generator.

For more information, see SqlMetal.exe (Code Generation Tool). Developers using Visual Studio can also use the Object Relational Designer to generate code. See LINQ to SQL Tools in Visual Studio.

DBML Extractor

The DBML Extractor is a LINQ to SQL component that takes database metadata as input and produces a DBML file as output.

Code Generator

The Code Generator is a LINQ to SQL component that translates DBML files to Visual Basic, C#, or XML mapping files.

XML Schema Definition File

The DBML file must be valid against the following schema definition as an XSD file.

Distinguish this schema definition file from the schema definition file that is used to validate an external mapping file. For more information, see External Mapping.

Note

Visual Studio users will also find this XSD file in the XML Schemas dialog box as "DbmlSchema.xsd". To use the XSD file correctly for validating a DBML file, see How to: Validate DBML and External Mapping Files.

<?xml version="1.0" encoding="utf-16"?>  
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" targetNamespace="http://schemas.microsoft.com/linqtosql/dbml/2007" xmlns="http://schemas.microsoft.com/linqtosql/dbml/2007"  
elementFormDefault="qualified" >  
  <xs:element name="Database" type="Database" />  
  <xs:complexType name="Database">  
    <xs:sequence>  
      <xs:element name="Connection" type="Connection" minOccurs="0" maxOccurs="1" />  
      <xs:element name="Table" type="Table" minOccurs="0" maxOccurs="unbounded" />  
      <xs:element name="Function" type="Function" minOccurs="0" maxOccurs="unbounded" />  
    </xs:sequence>  
    <xs:attribute name="Name" type="xs:string" use="optional" />  
    <xs:attribute name="EntityNamespace" type="xs:string" use="optional" />  
    <xs:attribute name="ContextNamespace" type="xs:string" use="optional" />  
    <xs:attribute name="Class" type="xs:string" use="optional" />  
    <xs:attribute name="AccessModifier" type="AccessModifier" use="optional" />  
    <xs:attribute name="Modifier" type="ClassModifier" use="optional" />  
    <xs:attribute name="BaseType" type="xs:string" use="optional" />  
    <xs:attribute name="Provider" type="xs:string" use="optional" />  
    <xs:attribute name="ExternalMapping" type="xs:boolean" use="optional" />  
    <xs:attribute name="Serialization" type="SerializationMode" use="optional" />  
    <xs:attribute name="EntityBase" type="xs:string" use="optional" />  
  </xs:complexType>  
  <xs:complexType name="Table">  
    <xs:all>  
      <xs:element name="Type" type="Type" minOccurs="1" maxOccurs="1" />  
      <xs:element name="InsertFunction" type="TableFunction" minOccurs="0" maxOccurs="1" />  
      <xs:element name="UpdateFunction" type="TableFunction" minOccurs="0" maxOccurs="1" />  
      <xs:element name="DeleteFunction" type="TableFunction" minOccurs="0" maxOccurs="1" />  
    </xs:all>  
    <xs:attribute name="Name" type="xs:string" use="required" />  
    <xs:attribute name="Member" type="xs:string" use="optional" />  
    <xs:attribute name="AccessModifier" type="AccessModifier" use="optional" />  
    <xs:attribute name="Modifier" type="MemberModifier" use="optional" />  
  </xs:complexType>  
  <xs:complexType name="Type">  
    <xs:sequence>  
      <xs:choice minOccurs="0" maxOccurs="unbounded">  
        <xs:element name="Column" type="Column" minOccurs="0" maxOccurs="unbounded" />  
        <xs:element name="Association" type="Association" minOccurs="0" maxOccurs="unbounded" />  
      </xs:choice>  
      <xs:element name="Type" type="Type" minOccurs="0" maxOccurs="unbounded" />  
    </xs:sequence>  
    <xs:attribute name="IdRef" type="xs:IDREF" use="optional" />  
    <xs:attribute name="Id" type="xs:ID" use="optional" />  
    <xs:attribute name="Name" type="xs:string" use="optional" />  
    <xs:attribute name="InheritanceCode" type="xs:string" use="optional" />  
    <xs:attribute name="IsInheritanceDefault" type="xs:boolean" use="optional" />  
    <xs:attribute name="AccessModifier" type="AccessModifier" use="optional" />  
    <xs:attribute name="Modifier" type="ClassModifier" use="optional" />  
  </xs:complexType>  
  <xs:complexType name="Column">  
    <xs:attribute name="Name" type="xs:string" use="optional" />  
    <xs:attribute name="Member" type="xs:string" use="optional" />  
    <xs:attribute name="Storage" type="xs:string" use="optional" />  
    <xs:attribute name="AccessModifier" type="AccessModifier" use="optional" />  
    <xs:attribute name="Modifier" type="MemberModifier" use="optional" />  
    <xs:attribute name="Type" type="xs:string" use="required" />  
    <xs:attribute name="DbType" type="xs:string" use="optional" />  
    <xs:attribute name="IsReadOnly" type="xs:boolean" use="optional" />  
    <xs:attribute name="IsPrimaryKey" type="xs:boolean" use="optional" />  
    <xs:attribute name="IsDbGenerated" type="xs:boolean" use="optional" />  
    <xs:attribute name="CanBeNull" type="xs:boolean" use="optional" />  
    <xs:attribute name="UpdateCheck" type="UpdateCheck" use="optional" />  
    <xs:attribute name="IsDiscriminator" type="xs:boolean" use="optional" />  
    <xs:attribute name="Expression" type="xs:string" use="optional" />  
    <xs:attribute name="IsVersion" type="xs:boolean" use="optional" />  
    <xs:attribute name="IsDelayLoaded" type="xs:boolean" use="optional" />  
    <xs:attribute name="AutoSync" type="AutoSync" use="optional" />  
  </xs:complexType>  
  <xs:complexType name="Association">  
    <xs:attribute name="Name" type="xs:string" use="required" />  
    <xs:attribute name="Member" type="xs:string" use="required" />  
    <xs:attribute name="Storage" type="xs:string" use="optional" />  
    <xs:attribute name="AccessModifier" type="AccessModifier" use="optional" />  
    <xs:attribute name="Modifier" type="MemberModifier" use="optional" />  
    <xs:attribute name="Type" type="xs:string" use="required" />  
    <xs:attribute name="ThisKey" type="xs:string" use="optional" />  
    <xs:attribute name="OtherKey" type="xs:string" use="optional" />  
    <xs:attribute name="IsForeignKey" type="xs:boolean" use="optional" />  
    <xs:attribute name="Cardinality" type="Cardinality" use="optional" />  
    <xs:attribute name="DeleteRule" type="xs:string" use="optional" />  
    <xs:attribute name="DeleteOnNull" type="xs:boolean" use="optional" />  
  </xs:complexType>  
  <xs:complexType name="Function">  
    <xs:sequence>  
      <xs:element name="Parameter" type="Parameter" minOccurs="0" maxOccurs="unbounded" />  
      <xs:choice>  
        <xs:element name="ElementType" type="Type" minOccurs="0" maxOccurs="unbounded" />  
        <xs:element name="Return" type="Return" minOccurs="0" maxOccurs="1" />  
      </xs:choice>  
    </xs:sequence>  
    <xs:attribute name="Name" type="xs:string" use="required" />  
    <xs:attribute name="Id" type="xs:ID" use="optional" />  
    <xs:attribute name="Method" type="xs:string" use="optional" />  
    <xs:attribute name="AccessModifier" type="AccessModifier" use="optional" />  
    <xs:attribute name="Modifier" type="MemberModifier" use="optional" />  
    <xs:attribute name="HasMultipleResults" type="xs:boolean" use="optional" />  
    <xs:attribute name="IsComposable" type="xs:boolean" use="optional" />  
  </xs:complexType>  
  <xs:complexType name="TableFunction">  
    <xs:sequence>  
      <xs:element name="Argument" type="TableFunctionParameter" minOccurs="0" maxOccurs="unbounded" />  
      <xs:element name="Return" type="TableFunctionReturn" minOccurs="0" maxOccurs="1" />  
    </xs:sequence>  
    <xs:attribute name="FunctionId" type="xs:IDREF" use="required" />  
    <xs:attribute name="AccessModifier" type="AccessModifier" use="optional" />  
  </xs:complexType>  
  <xs:complexType name="Parameter">  
    <xs:attribute name="Name" type="xs:string" use="required" />  
    <xs:attribute name="Parameter" type="xs:string" use="optional" />  
    <xs:attribute name="Type" type="xs:string" use="required" />  
    <xs:attribute name="DbType" type="xs:string" use="optional" />  
    <xs:attribute name="Direction" type="ParameterDirection" use="optional" />  
  </xs:complexType>  
  <xs:complexType name="Return">  
    <xs:attribute name="Type" type="xs:string" use="required" />  
    <xs:attribute name="DbType" type="xs:string" use="optional" />  
  </xs:complexType>  
  <xs:complexType name="TableFunctionParameter">  
    <xs:attribute name="Parameter" type="xs:string" use="required" />  
    <xs:attribute name="Member" type="xs:string" use="required" />  
    <xs:attribute name="Version" type="Version" use="optional" />  
  </xs:complexType>  
  <xs:complexType name="TableFunctionReturn">  
    <xs:attribute name="Member" type="xs:string" use="required" />  
  </xs:complexType>  
  <xs:complexType name="Connection">  
    <xs:attribute name="Provider" type="xs:string" use="required" />  
    <xs:attribute name="Mode" type="ConnectionMode" use="optional" />  
    <xs:attribute name="ConnectionString" type="xs:string" use="optional" />  
    <xs:attribute name="SettingsObjectName" type="xs:string" use="optional" />  
    <xs:attribute name="SettingsPropertyName" type="xs:string" use="optional" />  
  </xs:complexType>  
  <xs:simpleType name="ConnectionMode">  
    <xs:restriction base="xs:string">  
      <xs:enumeration value="ConnectionString" />  
      <xs:enumeration value="AppSettings" />  
      <xs:enumeration value="WebSettings" />  
    </xs:restriction>  
  </xs:simpleType>  
  <xs:simpleType name="AccessModifier">  
    <xs:restriction base="xs:string">  
      <xs:enumeration value="Public" />  
      <xs:enumeration value="Internal" />  
      <xs:enumeration value="Protected" />  
      <xs:enumeration value="ProtectedInternal" />  
      <xs:enumeration value="Private" />  
    </xs:restriction>  
  </xs:simpleType>  
  <xs:simpleType name="UpdateCheck">  
    <xs:restriction base="xs:string">  
      <xs:enumeration value="Always" />  
      <xs:enumeration value="Never" />  
      <xs:enumeration value="WhenChanged" />  
    </xs:restriction>  
  </xs:simpleType>  
  <xs:simpleType name="SerializationMode">  
    <xs:restriction base="xs:string">  
      <xs:enumeration value="None" />  
      <xs:enumeration value="Unidirectional" />  
    </xs:restriction>  
  </xs:simpleType>  
  <xs:simpleType name="ParameterDirection">  
    <xs:restriction base="xs:string">  
      <xs:enumeration value="In" />  
      <xs:enumeration value="Out" />  
      <xs:enumeration value="InOut" />  
    </xs:restriction>  
  </xs:simpleType>  
  <xs:simpleType name="Version">  
    <xs:restriction base="xs:string">  
      <xs:enumeration value="Current" />  
      <xs:enumeration value="Original" />  
    </xs:restriction>  
  </xs:simpleType>  
  <xs:simpleType name="AutoSync">  
    <xs:restriction base="xs:string">  
      <xs:enumeration value="Never" />  
      <xs:enumeration value="OnInsert" />  
      <xs:enumeration value="OnUpdate" />  
      <xs:enumeration value="Always" />  
      <xs:enumeration value="Default" />  
    </xs:restriction>  
  </xs:simpleType>  
  <xs:simpleType name="ClassModifier">  
    <xs:restriction base="xs:string">  
      <xs:enumeration value="Sealed" />  
      <xs:enumeration value="Abstract" />  
    </xs:restriction>  
  </xs:simpleType>  
  <xs:simpleType name="MemberModifier">  
    <xs:restriction base="xs:string">  
      <xs:enumeration value="Virtual" />  
      <xs:enumeration value="Override" />  
      <xs:enumeration value="New" />  
      <xs:enumeration value="NewVirtual" />  
    </xs:restriction>  
  </xs:simpleType>  
  <xs:simpleType name="Cardinality">  
    <xs:restriction base="xs:string">  
      <xs:enumeration value="One" />  
      <xs:enumeration value="Many" />  
    </xs:restriction>  
  </xs:simpleType>  
</xs:schema>  

Sample DBML File

The following code is an excerpt from the DBML file created from the Northwind sample database. You can generate the whole file by using SQLMetal with the /xml option. For more information, see SqlMetal.exe (Code Generation Tool).

XML
<?xml version="1.0" encoding="utf-16"?>  
<Database Name="northwnd" Class="Northwnd" xmlns="http://schemas.microsoft.com/dsltools/DLinqML">  
  
  <Table Name="Customers">  
    <Type Name="Customer">  
      <Column Name="CustomerID" Type="System.String" DbType="NChar(5) NOT NULL" IsPrimaryKey="True" CanBeNull="False" />  
      <Column Name="CompanyName" Type="System.String" DbType="NVarChar(40) NOT NULL" CanBeNull="False" />  
      <Column Name="ContactName" Type="System.String" DbType="NVarChar(30)" CanBeNull="True" />  
      <Column Name="ContactTitle" Type="System.String" DbType="NVarChar(30)" CanBeNull="True" />  
      <Column Name="Address" Type="System.String" DbType="NVarChar(60)" CanBeNull="True" />  
      <Column Name="City" Type="System.String" DbType="NVarChar(15)" CanBeNull="True" />  
      <Column Name="Region" Type="System.String" DbType="NVarChar(15)" CanBeNull="True" />  
      <Column Name="PostalCode" Type="System.String" DbType="NVarChar(10)" CanBeNull="True" />  
      <Column Name="Country" Type="System.String" DbType="NVarChar(15)" CanBeNull="True" />  
      <Column Name="Phone" Type="System.String" DbType="NVarChar(24)" CanBeNull="True" />  
      <Column Name="Fax" Type="System.String" DbType="NVarChar(24)" CanBeNull="True" />  
      <Association Name="FK_CustomerCustomerDemo_Customers" Member="CustomerCustomerDemos" ThisKey="CustomerID" OtherKey="CustomerID" OtherTable="CustomerCustomerDemo" DeleteRule="NO ACTION" />  
      <Association Name="FK_Orders_Customers" Member="Orders" ThisKey="CustomerID" OtherKey="CustomerID" OtherTable="Orders" DeleteRule="NO ACTION" />  
    </Type>  
  </Table>  
</Database>  

See also

External Mapping

LINQ to SQL supports external mapping, a process by which you use a separate XML file to specify mapping between the data model of the database and your object model. Advantages of using an external mapping file include the following:

  • You can keep your mapping code out of your application code. This approach reduces clutter in your application code.

  • You can treat an external mapping file something like a configuration file. For example, you can update how your application behaves after shipping the binaries by just swapping out the external mapping file.

Requirements

The mapping file must be an XML file, and the file must validate against a LINQ to SQL schema definition (.xsd) file.

The following rules apply:

  • The mapping file must be an XML file.

  • The XML mapping file must be valid against the XML schema definition file. For more information, see How to: Validate DBML and External Mapping Files.

  • External mapping overrides attribute-based mapping. In other words, when you use an external mapping source to create a DataContext, the DataContext ignores all mapping attributes you have created on classes. This behavior is true whether or not the class is included in the external mapping file.

  • LINQ to SQL does not support the hybrid use of the two mapping approaches (attribute-based and external).
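
For example, an external mapping file can be loaded at run time and handed to the DataContext (a sketch; the file path, connection string, and Customer entity are assumptions):

C#
// Requires System.Data.Linq and System.Data.Linq.Mapping.
XmlMappingSource mapping = XmlMappingSource.FromUrl(@"C:\maps\northwind.map");

using (DataContext db = new DataContext(
    @"Data Source=.\SQLEXPRESS;Initial Catalog=Northwnd;Integrated Security=true",
    mapping))
{
    // Any attribute-based mapping on Customer is ignored here.
    Table<Customer> customers = db.GetTable<Customer>();
}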

XML Schema Definition File

External mapping in LINQ to SQL must be valid against the following XML schema definition.

Distinguish this schema definition file from the schema definition file that is used to validate a DBML file. For more information, see Code Generation in LINQ to SQL.

Note

Visual Studio users will also find this XSD file in the XML Schemas dialog box as "LinqToSqlMapping.xsd". To use this file correctly for validating an external mapping file, see How to: Validate DBML and External Mapping Files.

<?xml version="1.0" encoding="utf-16"?>  
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" targetNamespace="http://schemas.microsoft.com/linqtosql/mapping/2007" xmlns="http://schemas.microsoft.com/linqtosql/mapping/2007"  
elementFormDefault="qualified" >  
  <xs:element name="Database" type="Database" />  
  <xs:complexType name="Database">  
    <xs:sequence>  
      <xs:element name="Table" type="Table" minOccurs="0" maxOccurs="unbounded" />  
      <xs:element name="Function" type="Function" minOccurs="0" maxOccurs="unbounded" />  
    </xs:sequence>  
    <xs:attribute name="Name" type="xs:string" use="optional" />  
    <xs:attribute name="Provider" type="xs:string" use="optional" />  
  </xs:complexType>  
  <xs:complexType name="Table">  
    <xs:sequence>  
      <xs:element name="Type" type="Type" minOccurs="1" maxOccurs="1" />  
    </xs:sequence>  
    <xs:attribute name="Name" type="xs:string" use="optional" />  
    <xs:attribute name="Member" type="xs:string" use="optional" />  
  </xs:complexType>  
  <xs:complexType name="Type">  
    <xs:sequence>  
      <xs:choice minOccurs="0" maxOccurs="unbounded">  
        <xs:element name="Column" type="Column" minOccurs="0" maxOccurs="unbounded" />  
        <xs:element name="Association" type="Association" minOccurs="0" maxOccurs="unbounded" />  
      </xs:choice>  
      <xs:element name="Type" type="Type" minOccurs="0" maxOccurs="unbounded" />  
    </xs:sequence>  
    <xs:attribute name="Name" type="xs:string" use="required" />  
    <xs:attribute name="InheritanceCode" type="xs:string" use="optional" />  
    <xs:attribute name="IsInheritanceDefault" type="xs:boolean" use="optional" />  
  </xs:complexType>  
  <xs:complexType name="Column">  
    <xs:attribute name="Name" type="xs:string" use="optional" />  
    <xs:attribute name="Member" type="xs:string" use="required" />  
    <xs:attribute name="Storage" type="xs:string" use="optional" />  
    <xs:attribute name="DbType" type="xs:string" use="optional" />  
    <xs:attribute name="IsPrimaryKey" type="xs:boolean" use="optional" />  
    <xs:attribute name="IsDbGenerated" type="xs:boolean" use="optional" />  
    <xs:attribute name="CanBeNull" type="xs:boolean" use="optional" />  
    <xs:attribute name="UpdateCheck" type="UpdateCheck" use="optional" />  
    <xs:attribute name="IsDiscriminator" type="xs:boolean" use="optional" />  
    <xs:attribute name="Expression" type="xs:string" use="optional" />  
    <xs:attribute name="IsVersion" type="xs:boolean" use="optional" />  
    <xs:attribute name="AutoSync" type="AutoSync" use="optional" />  
  </xs:complexType>  
  <xs:complexType name="Association">  
    <xs:attribute name="Name" type="xs:string" use="optional" />  
    <xs:attribute name="Member" type="xs:string" use="required" />  
    <xs:attribute name="Storage" type="xs:string" use="optional" />  
    <xs:attribute name="ThisKey" type="xs:string" use="optional" />  
    <xs:attribute name="OtherKey" type="xs:string" use="optional" />  
    <xs:attribute name="IsForeignKey" type="xs:boolean" use="optional" />  
    <xs:attribute name="IsUnique" type="xs:boolean" use="optional" />  
    <xs:attribute name="DeleteRule" type="xs:string" use="optional" />  
    <xs:attribute name="DeleteOnNull" type="xs:boolean" use="optional" />  
  </xs:complexType>  
  <xs:complexType name="Function">  
    <xs:sequence>  
      <xs:element name="Parameter" type="Parameter" minOccurs="0" maxOccurs="unbounded" />  
      <xs:choice>  
        <xs:element name="ElementType" type="Type" minOccurs="0" maxOccurs="unbounded" />  
        <xs:element name="Return" type="Return" minOccurs="0" maxOccurs="1" />  
      </xs:choice>  
    </xs:sequence>  
    <xs:attribute name="Name" type="xs:string" use="optional" />  
    <xs:attribute name="Method" type="xs:string" use="required" />  
    <xs:attribute name="IsComposable" type="xs:boolean" use="optional" />  
  </xs:complexType>  
  <xs:complexType name="Parameter">  
    <xs:attribute name="Name" type="xs:string" use="optional" />  
    <xs:attribute name="Parameter" type="xs:string" use="required" />  
    <xs:attribute name="DbType" type="xs:string" use="optional" />  
    <xs:attribute name="Direction" type="ParameterDirection" use="optional" />  
  </xs:complexType>  
  <xs:complexType name="Return">  
    <xs:attribute name="DbType" type="xs:string" use="optional" />  
  </xs:complexType>  
  <xs:simpleType name="UpdateCheck">  
    <xs:restriction base="xs:string">  
      <xs:enumeration value="Always" />  
      <xs:enumeration value="Never" />  
      <xs:enumeration value="WhenChanged" />  
    </xs:restriction>  
  </xs:simpleType>  
  <xs:simpleType name="ParameterDirection">  
    <xs:restriction base="xs:string">  
      <xs:enumeration value="In" />  
      <xs:enumeration value="Out" />  
      <xs:enumeration value="InOut" />  
    </xs:restriction>  
  </xs:simpleType>  
  <xs:simpleType name="AutoSync">  
    <xs:restriction base="xs:string">  
      <xs:enumeration value="Never" />  
      <xs:enumeration value="OnInsert" />  
      <xs:enumeration value="OnUpdate" />  
      <xs:enumeration value="Always" />  
      <xs:enumeration value="Default" />  
    </xs:restriction>  
  </xs:simpleType>  
</xs:schema>  

See also

Frequently Asked Questions

The following sections address some common issues that you might encounter when you implement LINQ.

Additional issues are addressed in Troubleshooting.

Cannot Connect

Q. I cannot connect to my database.

A. Make sure your connection string is correct and that your SQL Server instance is running. Note also that LINQ to SQL requires the Named Pipes protocol to be enabled. For more information, see Learning by Walkthroughs.

Changes to Database Lost

Q. I made a change to data in the database, but when I reran my application, the change was no longer there.

A. Make sure that you call SubmitChanges to save results to the database.

Database Connection: Open How Long?

Q. How long does my database connection remain open?

A. A connection typically remains open until you consume the query results. If you expect to take time to process all the results and are not opposed to caching the results, apply ToList to the query. In common scenarios where each object is processed only one time, the streaming model is superior in both DataReader and LINQ to SQL.

The exact details of connection usage depend on the following:

  • Connection status if the DataContext is constructed with a connection object.

  • Connection string settings (for example, enabling Multiple Active Result Sets (MARS)). For more information, see Multiple Active Result Sets (MARS).

Updating Without Querying

Q. Can I update table data without first querying the database?

A. Although LINQ to SQL does not have set-based update commands, you can use either of the following techniques to update without first querying:

  • Use ExecuteCommand to send SQL code.

  • Create a new instance of the object and initialize all the current values (fields) that affect the update. Then attach the object to the DataContext by using Attach and modify the field you want to change, as sketched below.
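
The second technique might look like the following sketch (reusing the illustrative Customer entity and context from earlier in this topic; the key and field values are assumptions):

C#
// connectionString is assumed to be defined elsewhere.
using (NorthwindContext db = new NorthwindContext(connectionString))
{
    // Construct an object that represents the row as it currently exists.
    Customer cust = new Customer { CustomerID = "ALFKI", CompanyName = "Alfreds Futterkiste" };

    // Attach it to the context, then change only the field to be updated.
    db.Customers.Attach(cust);
    cust.CompanyName = "Alfreds Futterkiste GmbH";

    db.SubmitChanges();
}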

Unexpected Query Results

Q. My query is returning unexpected results. How can I inspect what is occurring?

A. LINQ to SQL provides several tools for inspecting the SQL code it generates. One of the most important is Log. For more information, see Debugging Support.

Unexpected Stored Procedure Results

Q. I have a stored procedure whose return value is calculated by MAX(). When I drag the stored procedure to the O/R Designer surface, the return value is not correct.

A. LINQ to SQL provides two ways to return database-generated values by way of stored procedures:

  • By naming the output result.

  • By explicitly specifying an output parameter.

The following is an example of incorrect output. Because LINQ to SQL cannot map the results, it always returns 0:

SQL
create procedure proc2
as
begin
select max(i) from t where name like 'hello'
end

The following is an example of correct output by using an output parameter:

SQL
create procedure proc2
@result int OUTPUT
as
select @result = MAX(i) from t where name like 'hello'
go

The following is an example of correct output by naming the output result:

SQL
create procedure proc2
as
begin
select max(i) AS MaxResult from t where name like 'hello'
end

For more information, see Customizing Operations By Using Stored Procedures.

Serialization Errors

Q. When I try to serialize, I get the following error: "Type 'System.Data.Linq.ChangeTracker+StandardChangeTracker' ... is not marked as serializable."

A. Code generation in LINQ to SQL supports DataContractSerializer serialization. It does not support XmlSerializer or BinaryFormatter. For more information, see Serialization.

Multiple DBML Files

Q. When I have multiple DBML files that share some tables in common, I get a compiler error.

A. Set the Context Namespace and Entity Namespace properties from the Object Relational Designer to a distinct value for each DBML file. This approach eliminates the name/namespace collision.

Avoiding Explicit Setting of Database-Generated Values on Insert or Update

Q. I have a database table with a DateCreated column that defaults to SQL Getdate(). When I try to insert a new record by using LINQ to SQL, the value gets set to NULL. I would expect it to be set to the database default.

A. LINQ to SQL handles this situation automatically for identity (auto-increment), rowguidcol (database-generated GUID), and timestamp columns. In other cases, you should manually set the IsDbGenerated=true and AutoSync (Always, OnInsert, or OnUpdate) properties.
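
For example, such a DateCreated member might be mapped as follows (a sketch; the member name and DbType string are assumptions):

C#
// The database supplies the value on insert; AutoSync copies it back to the object.
[Column(DbType = "DateTime NOT NULL DEFAULT GETDATE()",
    IsDbGenerated = true, AutoSync = AutoSync.OnInsert)]
public DateTime DateCreated { get; set; }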

Multiple DataLoadOptions

Q. Can I specify additional load options without overwriting the first?

A. Yes. The first is not overwritten, as in the following example:

C#
DataLoadOptions dlo = new DataLoadOptions();
dlo.LoadWith<Order>(o => o.Customer);
dlo.LoadWith<Order>(o => o.OrderDetails);
// Both LoadWith directives are kept when the options are assigned to the context.
db.LoadOptions = dlo;

Errors Using SQL Compact 3.5

Q. I get an error when I drag tables out of a SQL Server Compact 3.5 database.

A. The Object Relational Designer does not support SQL Server Compact 3.5, although the LINQ to SQL runtime does. In this situation, you must create your own entity classes and add the appropriate attributes.

Errors in Inheritance Relationships

Q. I used the toolbox inheritance shape in the Object Relational Designer to connect two entities, but I get errors.

A. Creating the relationship is not enough. You must provide information such as the discriminator column, base class discriminator value, and derived class discriminator value.

Provider Model

Q. Is a public provider model available?

A. No public provider model is available. At this time, LINQ to SQL supports SQL Server and SQL Server Compact 3.5 only.

SQL-Injection Attacks

Q. How is LINQ to SQL protected from SQL-injection attacks?

A. SQL injection has been a significant risk for traditional SQL queries formed by concatenating user input. LINQ to SQL avoids such injection by using SqlParameter in queries: user input is turned into parameter values, which prevents malicious commands embedded in that input from being executed.
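
For example, user input that reaches a query is sent as a parameter value rather than being concatenated into the SQL text (a sketch; names are illustrative):

C#
string searchCity = userInput;   // value supplied by the user

// Translated roughly to: SELECT ... FROM [Customers] WHERE [City] = @p0
// The input travels as the value of @p0, not as SQL text.
var matches =
    from c in db.Customers
    where c.City == searchCity
    select c;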

Changing Read-only Flag in DBML Files

Q. How do I eliminate setters from some properties when I create an object model from a DBML file?

A. Take the following steps for this advanced scenario:

  1. In the .dbml file, modify the property by changing the IsReadOnly flag to True.

  2. Add a partial class. Create a constructor with parameters for the read-only members.

  3. Review the default UpdateCheck value (Never) to determine whether that is the correct value for your application.

    Caution

    If you are using the Object Relational Designer in Visual Studio, your changes might be overwritten.

APTCA

Q. Is System.Data.Linq marked for use by partially trusted code?

A. Yes, the System.Data.Linq.dll assembly is among those .NET Framework assemblies marked with the AllowPartiallyTrustedCallersAttribute attribute. Without this marking, assemblies in the .NET Framework are intended for use only by fully trusted code.

The principal scenario in LINQ to SQL for allowing partially trusted callers is to enable the LINQ to SQL assembly to be accessed from Web applications, where the trust configuration is Medium.

Mapping Data from Multiple Tables

Q. The data in my entity comes from multiple tables. How do I map it?

A. You can create a view in a database and map the entity to the view. LINQ to SQL generates the same SQL for views as it does for tables.

Note

The use of views in this scenario has limitations. This approach works most safely when the operations performed on Table<TEntity> are supported by the underlying view. Only you know which operations are intended. For example, most applications are read-only, and another sizeable number perform Create/Update/Delete operations only by using stored procedures against views.

Connection Pooling

Q. Is there a construct that can help with DataContext pooling?

A. Do not try to reuse instances of DataContext. Each DataContext maintains state (including an identity cache) for one particular edit/query session. To obtain new instances based on the current state of the database, use a new DataContext.

You can still use underlying ADO.NET connection pooling. For more information, see SQL Server Connection Pooling (ADO.NET).

Second DataContext Is Not Updated

Q. I used one instance of DataContext to store values in the database. However, a second DataContext on the same database does not reflect the updated values. The second DataContext instance seems to return cached values.

A. This behavior is by design. LINQ to SQL continues to return the same instances/values that you saw in the first instance. When you make updates, you use optimistic concurrency. The original data is used to check against the current database state to assert that it is in fact still unchanged. If it has changed, a conflict occurs and your application must resolve it. One option of your application is to reset the original state to the current database state and to try the update again. For more information, see How to: Manage Change Conflicts.

You can also set ObjectTrackingEnabled to false, which turns off caching and change tracking. You can then retrieve the latest values every time that you query.
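
For example, a context intended only for fresh reads might be configured like this (a sketch reusing the illustrative context type):

C#
using (NorthwindContext db = new NorthwindContext(connectionString))
{
    // Must be set before the first query; afterwards every query
    // returns current database values instead of cached instances.
    db.ObjectTrackingEnabled = false;

    var currentCustomers = db.Customers.ToList();
}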

Cannot Call SubmitChanges in Read-only Mode

Q. When I try to call SubmitChanges in read-only mode, I get an error.

A. Read-only mode turns off the ability of the context to track changes.

See also

SQL Server Compact and LINQ to SQL

SQL Server Compact is the default database installed with Visual Studio. For more information, see Using SQL Server Compact (Visual Studio).

This topic outlines the key differences in usage, configuration, feature sets, and scope of LINQ to SQL support.

Characteristics of SQL Server Compact in Relation to LINQ to SQL

By default, SQL Server Compact is installed for all Visual Studio editions, and is therefore available on the development computer for use with LINQ to SQL. But deployment of an application that uses SQL Server Compact and LINQ to SQL differs from that for a SQL Server application. SQL Server Compact is not a part of the .NET Framework, and therefore must be packaged with the application or downloaded separately from the Microsoft site.

Note the following characteristics:

  • SQL Server Compact is packaged as a DLL that can be used against database files (.sdf extension) directly.

  • SQL Server Compact runs in the same process as the client application. The efficiency of communication with SQL Server Compact can therefore be significantly higher than communicating with SQL Server. On the other hand, SQL Server Compact does require interoperability between managed and unmanaged code with its attendant costs.

  • The size of the SQL Server Compact DLL is small. This feature reduces the overall application size.

  • The LINQ to SQL runtime and the SQLMetal command-line tool support SQL Server Compact.

  • The Object Relational Designer does not support SQL Server Compact.

Feature Set

The SQL Server Compact feature set is much simpler than the feature set of SQL Server in the following ways that can affect LINQ to SQL applications:

  • SQL Server Compact does not support stored procedures or views.

  • SQL Server Compact supports only a subset of data types and SQL functions.

  • SQL Server Compact supports only a subset of SQL constructs.

  • SQL Server Compact provides only a minimal optimizer. It is possible that some queries might time out.

  • SQL Server Compact does not support partial trust.

See also

Standard Query Operator Translation

LINQ to SQL translates Standard Query Operators to SQL commands. The query processor of the database determines the execution semantics of SQL translation.

Standard Query Operators are defined against sequences. A sequence is ordered and relies on reference identity for each element of the sequence. For more information, see Standard Query Operators Overview (C#) or Standard Query Operators Overview (Visual Basic).

SQL deals primarily with unordered sets of values. Ordering is typically an explicitly stated, post-processing operation that is applied to the final result of a query rather than to intermediate results. Identity is defined by values. For this reason, SQL queries are understood to deal with multisets (bags) instead of sets.

The following paragraphs describe the differences between the Standard Query Operators and their SQL translation for the SQL Server provider for LINQ to SQL.

Operator Support

Concat

The Concat method is defined for ordered multisets where the order of the receiver and the order of the argument are the same. Concat works as UNION ALL over the multisets followed by the common order.

Because ordering in SQL is the final step before results are produced, Concat does not preserve the ordering of its arguments. To ensure appropriate ordering, you must explicitly order the results of Concat.
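
For example, you might order the concatenated result explicitly (a sketch; the Suppliers table is an assumption):

C#
// Any ordering inside the Concat arguments is not preserved;
// apply OrderBy to the combined result instead.
var allNames = db.Customers.Select(c => c.CompanyName)
    .Concat(db.Suppliers.Select(s => s.CompanyName))
    .OrderBy(name => name);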

Intersect, Except, Union

The Intersect and Except methods are well defined only on sets. The semantics for multisets is undefined.

The Union method is defined for multisets as the unordered concatenation of the multisets (effectively the result of the UNION ALL clause in SQL).

Take, Skip

Take and Skip methods are well defined only against ordered sets. The semantics for unordered sets or multisets are undefined.

Note

Take and Skip have certain limitations when they are used in queries against SQL Server 2000. For more information, see the "Skip and Take Exceptions in SQL Server 2000" entry in Troubleshooting.

Because of limitations on ordering in SQL, LINQ to SQL tries to move the ordering of the argument of these methods to the result of the method. For example, consider the following LINQ to SQL query:

C#
var custQuery = 
    (from cust in db.Customers
    where cust.City == "London"
    orderby cust.CustomerID
    select cust).Skip(1).Take(1);

The generated SQL for this code moves the ordering to the end, as follows:

SQL
SELECT TOP 1 [t0].[CustomerID], [t0].[CompanyName],
FROM [Customers] AS [t0]
WHERE (NOT (EXISTS(
    SELECT NULL AS [EMPTY]
    FROM (
        SELECT TOP 1 [t1].[CustomerID]
        FROM [Customers] AS [t1]
        WHERE [t1].[City] = @p0
        ORDER BY [t1].[CustomerID]
        ) AS [t2]
    WHERE [t0].[CustomerID] = [t2].[CustomerID]
    ))) AND ([t0].[City] = @p1)
ORDER BY [t0].[CustomerID]

It becomes obvious that all the specified ordering must be consistent when Take and Skip are chained together. Otherwise, the results are undefined.

Both Take and Skip are well-defined for non-negative, constant integral arguments based on the Standard Query Operator specification.

Operators with No Translation

The following methods are not translated by LINQ to SQL. The most common reason is the difference between unordered multisets and sequences.

Operators Rationale
TakeWhile, SkipWhile SQL queries operate on multisets, not on sequences. ORDER BY must be the last clause applied to the results. For this reason, there is no general-purpose translation for these two methods.
Reverse Translation of this method is possible for an ordered set but is not currently translated by LINQ to SQL.
Last, LastOrDefault Translation of these methods is possible for an ordered set but is not currently translated by LINQ to SQL.
ElementAt, ElementAtOrDefault SQL queries operate on multisets, not on indexable sequences.
DefaultIfEmpty (overload with default arg) In general, a default value cannot be specified for an arbitrary tuple. Null values for tuples are possible in some cases through outer joins.

Expression Translation

Null semantics

LINQ to SQL does not impose null comparison semantics on SQL. Comparison operators are syntactically translated to their SQL equivalents. For this reason, the semantics reflect SQL semantics that are defined by server or connection settings. For example, two null values are considered unequal under default SQL Server settings, but you can change the settings to change the semantics. LINQ to SQL does not consider server settings when it translates queries.

A comparison with the literal null is translated to the appropriate SQL version (is null or is not null).

The value of null in collation is defined by SQL Server. LINQ to SQL does not change the collation.

Aggregates

The Standard Query Operator aggregate method Sum evaluates to zero for an empty sequence or for a sequence that contains only nulls. In LINQ to SQL, the semantics of SQL are left unchanged, and Sum evaluates to null instead of zero for an empty sequence or for a sequence that contains only nulls.

SQL limitations on intermediate results apply to aggregates in LINQ to SQL. The Sum of 32-bit integer quantities is not computed by using 64-bit results. Overflow might occur for a LINQ to SQL translation of Sum, even if the Standard Query Operator implementation does not cause an overflow for the corresponding in-memory sequence.

Likewise, the LINQ to SQL translation of Average of integer values is computed as an integer, not as a double.

Entity Arguments

LINQ to SQL enables entity types to be used in the GroupBy and OrderBy methods. In the translation of these operators, the use of an argument of a type is considered to be the equivalent to specifying all members of that type. For example, the following code is equivalent:

C#
db.Customers.GroupBy(c => c);
db.Customers.GroupBy(c => new { c.CustomerID, c.ContactName });

Equatable / Comparable Arguments

Equality of arguments is required in the implementation of the following methods:

LINQ to SQL supports equality and comparison for flat arguments, but not for arguments that are or contain sequences. A flat argument is a type that can be mapped to a SQL row. A projection of one or more entity types that can be statically determined not to contain a sequence is considered a flat argument.

Following are examples of flat arguments:

C#
db.Customers.Select(c => c);
db.Customers.Select(c => new { c.CustomerID, c.City });
db.Orders.Select(o => new { o.OrderID, o.Customer.City });
db.Orders.Select(o => new { o.OrderID, o.Customer });	

The following are examples of non-flat (hierarchical) arguments.

C#
// In the following line, c.Orders is a sequence.
db.Customers.Select(c => new { c.CustomerID, c.Orders });
// In the following line, the result has a sequence.
db.Customers.GroupBy(c => c.City);

Visual Basic Function Translation

The following helper functions that are used by the Visual Basic compiler are translated to corresponding SQL operators and functions:

  • CompareString

  • DateTime.Compare

  • Decimal.Compare

  • IIf (in Microsoft.VisualBasic.Interaction)

Conversion methods:

ToBoolean ToSByte ToByte ToChar
ToCharArrayRankOne ToDate ToDecimal ToDouble
ToInteger ToUInteger ToLong ToULong
ToShort ToUShort ToSingle ToString

Inheritance Support

Inheritance Mapping Restrictions

For more information, see How to: Map Inheritance Hierarchies.

Inheritance in Queries

C# casts are supported only in projection. Casts that are used elsewhere are not translated and are ignored. Aside from SQL function names, SQL really only performs the equivalent of the common language runtime (CLR) Convert. That is, SQL can change the value of one type to another. There is no equivalent of CLR cast because there is no concept of reinterpreting the same bits as those of another type. That is why a C# cast works only locally. It is not remoted.

The operators, is and as, and the GetType method are not restricted to the Select operator. They can be used in other query operators also.

SQL Server 2008 Support

Starting with the .NET Framework 3.5 SP1, LINQ to SQL supports mapping to new date and time types introduced with SQL Server 2008. But, there are some limitations to the LINQ to SQL query operators that you can use when operating against values mapped to these new types.

Unsupported Query Operators

The following query operators are not supported on values mapped to the new SQL Server date and time types (datetime2, date, time, and datetimeoffset):

  • Aggregate

  • Average

  • LastOrDefault

  • OfType

  • Sum

For more information about mapping to these SQL Server date and time types, see SQL-CLR Type Mapping.
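
For orientation, the following sketch (not part of the original topic) shows how an entity might map these types with attribute-based mapping. The table and member names are hypothetical; the CLR-side types follow the documented SQL-CLR type mapping (date and datetime2 map to System.DateTime, time to System.TimeSpan, and datetimeoffset to System.DateTimeOffset):

C#
using System;
using System.Data.Linq.Mapping;

// Hypothetical table; only the mapping pattern is of interest here.
[Table(Name = "Appointments")]
public class Appointment
{
    [Column(IsPrimaryKey = true)]
    public int AppointmentID { get; set; }

    [Column(DbType = "date NOT NULL")]
    public DateTime AppointmentDate { get; set; }         // SQL date

    [Column(DbType = "time NOT NULL")]
    public TimeSpan StartTime { get; set; }               // SQL time

    [Column(DbType = "datetime2 NOT NULL")]
    public DateTime CreatedAt { get; set; }               // SQL datetime2

    [Column(DbType = "datetimeoffset NOT NULL")]
    public DateTimeOffset ModifiedAt { get; set; }        // SQL datetimeoffset
}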

SQL Server 2005 Support

LINQ to SQL does not support the following SQL Server 2005 features:

  • Stored procedures written for SQL CLR.

  • User-defined types (UDTs).

  • XML query features.

SQL Server 2000 Support

The following SQL Server 2000 limitations (compared to Microsoft SQL Server 2005) affect LINQ to SQL support.

Cross Apply and Outer Apply Operators

These operators are not available in SQL Server 2000. LINQ to SQL tries a series of rewrites to replace them with appropriate joins.

Cross Apply and Outer Apply are generated for relationship navigations. The set of queries for which such rewrites are possible is not well defined. For this reason, the minimal set of queries that is supported for SQL Server 2000 is the set that does not involve relationship navigation.
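
A typical query that leads LINQ to SQL to generate an APPLY operator is a correlated subquery produced by relationship navigation inside a projection. The following sketch is not part of the original topic and assumes the same Northwind-style db; on SQL Server 2005 and later it is typically implemented with OUTER APPLY, while on SQL Server 2000 it depends on the rewrites described above:

C#
// Navigating c.Orders inside the projection produces a correlated subquery.
var latestOrders = db.Customers.Select(c => new
{
    c.CustomerID,
    LatestOrder = c.Orders
        .OrderByDescending(o => o.OrderDate)
        .FirstOrDefault()
});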

text / ntext

The text and ntext data types cannot be used in certain query operations that are possible against varchar(max) and nvarchar(max), which are supported by Microsoft SQL Server 2005.

No resolution is available for this limitation. Specifically, you cannot use Distinct() on any result that contains members that are mapped to text or ntext columns.

Behavior Triggered by Nested Queries

The query binder in SQL Server 2000 (through SP4) has some idiosyncrasies that are triggered by nested queries. The set of SQL queries that triggers these idiosyncrasies is not well defined. For this reason, you cannot define the set of LINQ to SQL queries that might cause SQL Server exceptions.

Skip and Take Operators

Take and Skip have certain limitations when they are used in queries against SQL Server 2000. For more information, see the "Skip and Take Exceptions in SQL Server 2000" entry in Troubleshooting.
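
For reference, the paging pattern that exercises these operators looks like the following sketch (not part of the original topic, assuming the same Northwind-style db). Against SQL Server 2005 and later the provider can translate it by using ROW_NUMBER; against SQL Server 2000 it must fall back to rewrites that are not always possible:

C#
// Explicit ordering gives Skip and Take a deterministic sequence to page over.
int pageSize = 20;
int pageIndex = 2;
var page = db.Customers
    .OrderBy(c => c.CustomerID)
    .Skip(pageIndex * pageSize)
    .Take(pageSize)
    .ToList();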

Object Materialization

Materialization creates CLR objects from rows that are returned by one or more SQL queries.

  • The following calls are executed locally as a part of materialization:

    • Constructors

    • ToString methods in projections

    • Type casts in projections

  • Methods that follow the AsEnumerable method are executed locally. This method does not cause immediate execution.

  • You can use a struct as the return type of a query result or as a member of the result type. Entities are required to be classes. Anonymous types are materialized as class instances, but named structs (non-entities) can be used in projection.

  • A member of the return type of a query result can be of type IQueryable<T>. It is materialized as a local collection.

  • The following methods cause the immediate materialization of the sequence that the methods are applied to:

See also

Samples

This topic provides links to the Visual Basic and C# solutions that contain LINQ to SQL sample code.

In This Section

Visual Basic version of the SampleQueries solution
Sample Queries (Visual Basic)

C# version of the SampleQueries solution
Sample Queries

Follow these steps to find additional examples of LINQ to SQL code and applications:

See also

 

Source/Reference


©sideway

ID: 201000023 Last Updated: 10/23/2020 Revision: 0 Ref:

