SQLAlchemy 0.7.3 Manual
Release 0.7.3
Mike Bayer
CONTENTS
1 Overview
    1.1 Overview
    1.2 Documentation Overview
    1.3 Code Examples
    1.4 Installation Guide
        1.4.1 Supported Platforms
        1.4.2 Supported Installation Methods
        1.4.3 Install via easy_install or pip
        1.4.4 Installing using setup.py
        1.4.5 Installing the C Extensions
        1.4.6 Installing on Python 3
        1.4.7 Installing a Database API
        1.4.8 Checking the Installed SQLAlchemy Version
    1.5 0.6 to 0.7 Migration

2 SQLAlchemy ORM
    2.1 Object Relational Tutorial
        2.1.1 Introduction
        2.1.2 Version Check
        2.1.3 Connecting
        2.1.4 Declare a Mapping
        2.1.5 Create an Instance of the Mapped Class
        2.1.6 Creating a Session
        2.1.7 Adding New Objects
        2.1.8 Rolling Back
        2.1.9 Querying
            Common Filter Operators
            Returning Lists and Scalars
            Using Literal SQL
            Counting
        2.1.10 Building a Relationship
        2.1.11 Working with Related Objects
        2.1.12 Querying with Joins
            Using Aliases
            Using Subqueries
            Selecting Entities from Subqueries
            Using EXISTS
            Common Relationship Operators
        2.1.13 Eager Loading
            Subquery Load
            Joined Load
            Explicit Join + Eagerload
        2.1.14 Deleting
            Configuring delete/delete-orphan Cascade
        2.1.15 Building a Many To Many Relationship
        2.1.16 Further Reference
    2.2 Mapper Configuration
        2.2.1 Classical Mappings
        2.2.2 Customizing Column Properties
            Naming Columns Distinctly from Attribute Names
            Mapping Multiple Columns to a Single Attribute
            Using column_property for column level options
            Mapping a Subset of Table Columns
        2.2.3 Deferred Column Loading
            Column Deferral API
        2.2.4 SQL Expressions as Mapped Attributes
            Alternatives to column_property()
        2.2.5 Changing Attribute Behavior
            Simple Validators
            Using Descriptors
            Synonyms
            Custom Comparators
        2.2.6 Composite Column Types
            Tracking In-Place Mutations on Composites
            Redefining Comparison Operations for Composites
        2.2.7 Mapping a Class against Multiple Tables
        2.2.8 Mapping a Class against Arbitrary Selects
        2.2.9 Multiple Mappers for One Class
        2.2.10 Multiple Persistence Mappers for One Class
        2.2.11 Constructors and Object Initialization
        2.2.12 Class Mapping API
    2.3 Relationship Configuration
        2.3.1 Basic Relational Patterns
            One To Many
            Many To One
            One To One
            Many To Many
            Association Object
        2.3.2 Adjacency List Relationships
            Self-Referential Query Strategies
            Configuring Self-Referential Eager Loading
        2.3.3 Linking Relationships with Backref
            Backref Arguments
            One Way Backrefs
        2.3.4 Specifying Alternate Join Conditions to relationship()
            Self-Referential Many-to-Many Relationship
            Specifying Foreign Keys
            Building Query-Enabled Properties
            Multiple Relationships against the Same Parent/Child
        2.3.5 Rows that point to themselves / Mutually Dependent Rows
        2.3.6 Mutable Primary Keys / Update Cascades
        2.3.7 Relationships API
    2.4 Collection Configuration and Techniques
        2.4.1 Working with Large Collections
            Dynamic Relationship Loaders
            Setting Noload
            Using Passive Deletes
        2.4.2 Customizing Collection Access
            Dictionary Collections
            Custom Collection Implementations
            Collections API
    2.5 Mapping Class Inheritance Hierarchies
        2.5.1 Joined Table Inheritance
            Basic Control of Which Tables are Queried
            Advanced Control of Which Tables are Queried
            Creating Joins to Specific Subtypes
        2.5.2 Single Table Inheritance
        2.5.3 Concrete Table Inheritance
            Concrete Inheritance with Declarative
        2.5.4 Using Relationships with Inheritance
            Relationships with Concrete Inheritance
        2.5.5 Using Inheritance with Declarative
    2.6 Using the Session
        2.6.1 What does the Session do?
        2.6.2 Getting a Session
            Adding Additional Configuration to an Existing sessionmaker()
            Creating Ad-Hoc Session Objects with Alternate Arguments
        2.6.3 Using the Session
            Quickie Intro to Object States
            Session Frequently Asked Questions
            Querying
            Adding New or Existing Items
            Merging
            Deleting
            Flushing
            Committing
            Rolling Back
            Expunging
            Closing
            Refreshing / Expiring
            Session Attributes
        2.6.4 Cascades
        2.6.5 Managing Transactions
            Using SAVEPOINT
            Autocommit Mode
            Enabling Two-Phase Commit
        2.6.6 Embedding SQL Insert/Update Expressions into a Flush
        2.6.7 Using SQL Expressions with Sessions
        2.6.8 Joining a Session into an External Transaction
        2.6.9 Contextual/Thread-local Sessions
            Creating a Thread-local Context
            Lifespan of a Contextual Session
            Contextual Session API
        2.6.10 Partitioning Strategies
            Vertical Partitioning
            Horizontal Partitioning
        2.6.11 Sessions API
            Session and sessionmaker()
            Session Utilities
            Attribute and State Management Utilities
    2.7 Querying
        2.7.1 The Query Object
        2.7.2 ORM-Specific Query Constructs
    2.8 Relationship Loading Techniques
        2.8.1 Using Loader Strategies: Lazy Loading, Eager Loading
        2.8.2 The Zen of Eager Loading
        2.8.3 What Kind of Loading to Use?
        2.8.4 Routing Explicit Joins/Statements into Eagerly Loaded Collections
        2.8.5 Relation Loader API
    2.9 ORM Events
        2.9.1 Attribute Events
        2.9.2 Mapper Events
        2.9.3 Instance Events
        2.9.4 Session Events
        2.9.5 Instrumentation Events
        2.9.6 Alternate Class Instrumentation
    2.10 ORM Extensions
        2.10.1 Association Proxy
            Simplifying Scalar Collections
            Creation of New Values
            Simplifying Association Objects
            Proxying to Dictionary Based Collections
            Composite Association Proxies
            Querying with Association Proxies
            API Documentation
        2.10.2 Declarative
            Synopsis
            Defining Attributes
            Accessing the MetaData
            Configuring Relationships
            Configuring Many-to-Many Relationships
            Defining SQL Expressions
            Table Configuration
            Using a Hybrid Approach with __table__
            Mapper Configuration
            Inheritance Configuration
            Mixin and Custom Base Classes
            Special Directives
            Class Constructor
            Sessions
            API Reference
        2.10.3 Mutation Tracking
            Establishing Mutability on Scalar Column Values
            Establishing Mutability on Composites
            API Reference
        2.10.4 Ordering List
            API Reference
        2.10.5 Horizontal Sharding
            API Documentation
        2.10.6 Hybrid Attributes
            Defining Expression Behavior Distinct from Attribute Behavior
            Defining Setters
            Working with Relationships
            Building Custom Comparators
            Hybrid Value Objects
            API Reference
        2.10.7 SqlSoup
            Introduction
            Loading objects
            Modifying objects
            Joins
            Relationships
            Advanced Use
            SqlSoup API
    2.11 Examples
        2.11.1 Adjacency List
        2.11.2 Associations
        2.11.3 Attribute Instrumentation
        2.11.4 Beaker Caching
        2.11.5 Directed Graphs
        2.11.6 Dynamic Relations as Dictionaries
        2.11.7 Generic Associations
        2.11.8 Horizontal Sharding
        2.11.9 Inheritance Mappings
        2.11.10 Large Collections
        2.11.11 Nested Sets
        2.11.12 Polymorphic Associations
        2.11.13 PostGIS Integration
        2.11.14 Versioned Objects
        2.11.15 Vertical Attribute Mapping
        2.11.16 XML Persistence
    2.12 Deprecated ORM Event Interfaces
        2.12.1 Mapper Events
        2.12.2 Session Events
        2.12.3 Attribute Events
    2.13 ORM Exceptions
    2.14 ORM Internals

3 SQLAlchemy Core
    3.1 SQL Expression Language Tutorial
        3.1.1 Introduction
        3.1.2 Version Check
        3.1.3 Connecting
        3.1.4 Define and Create Tables
        3.1.5 Insert Expressions
        3.1.6 Executing
        3.1.7 Executing Multiple Statements
        3.1.8 Connectionless / Implicit Execution
        3.1.9 Selecting
        3.1.10 Operators
        3.1.11 Conjunctions
        3.1.12 Using Text
        3.1.13 Using Aliases
        3.1.14 Using Joins
        3.1.15 Intro to Generative Selects
            Transforming a Statement
        3.1.16 Everything Else
            Bind Parameter Objects
            Functions
            Window Functions
            Unions and Other Set Operations
            Scalar Selects
            Correlated Subqueries
            Ordering, Grouping, Limiting, Offset...ing...
        3.1.17 Inserts and Updates
            Correlated Updates
        3.1.18 Deletes
        3.1.19 Further Reference
    3.2 SQL Statements and Expressions API
        3.2.1 Functions
        3.2.2 Classes
        3.2.3 Generic Functions
    3.3 Engine Configuration
        3.3.1 Supported Databases
        3.3.2 Engine Creation API
        3.3.3 Database Urls
        3.3.4 Custom DBAPI connect() arguments
        3.3.5 Configuring Logging
    3.4 Working with Engines and Connections
        3.4.1 Basic Usage
        3.4.2 Using Transactions
            Nesting of Transaction Blocks
        3.4.3 Understanding Autocommit
        3.4.4 Connectionless Execution, Implicit Execution
        3.4.5 Using the Threadlocal Execution Strategy
        3.4.6 Connection / Engine API
    3.5 Connection Pooling
        3.5.1 Connection Pool Configuration
        3.5.2 Switching Pool Implementations
        3.5.3 Using a Custom Connection Function
        3.5.4 Constructing a Pool
        3.5.5 Pool Event Listeners
        3.5.6 Dealing with Disconnects
            Disconnect Handling - Optimistic
            Disconnect Handling - Pessimistic
        3.5.7 API Documentation - Available Pool Implementations
        3.5.8 Pooling Plain DB-API Connections
    3.6 Schema Definition Language
        3.6.1 Describing Databases with MetaData
            Accessing Tables and Columns
            Creating and Dropping Database Tables
            Binding MetaData to an Engine or Connection
            Specifying the Schema Name
            Backend-Specific Options
            Column, Table, MetaData API
        3.6.2 Reflecting Database Objects
            Overriding Reflected Columns
            Reflecting Views
            Reflecting All Tables at Once
            Fine Grained Reflection with Inspector
        3.6.3 Column Insert/Update Defaults
            Scalar Defaults
            Python-Executed Functions
            SQL Expressions
            Server Side Defaults
            Triggered Columns
            Defining Sequences
            Default Objects API
        3.6.4 Defining Constraints and Indexes
            Defining Foreign Keys
            UNIQUE Constraint
            CHECK Constraint
            Setting up Constraints when using the Declarative ORM Extension
            Constraints API
            Indexes
        3.6.5 Customizing DDL
            Controlling DDL Sequences
            Custom DDL
            DDL Expression Constructs API
    3.7 Column and Data Types
        3.7.1 Generic Types
        3.7.2 SQL Standard Types
        3.7.3 Vendor-Specific Types
        3.7.4 Custom Types
            Overriding Type Compilation
            Augmenting Existing Types
            TypeDecorator Recipes
            Creating New Types
        3.7.5 Base Type API
    3.8 Events
        3.8.1 Event Registration
        3.8.2 Targets
        3.8.3 Modifiers
        3.8.4 Event Reference
        3.8.5 API Reference
    3.9 Core Events
        3.9.1 Connection Pool Events
        3.9.2 Connection Events
        3.9.3 Schema Events
    3.10 Custom SQL Constructs and Compilation Extension
        3.10.1 Synopsis
        3.10.2 Dialect-specific compilation rules
        3.10.3 Compiling sub-elements of a custom expression construct
            Cross Compiling between SQL and DDL compilers
        3.10.4 Enabling Autocommit on a Construct
        3.10.5 Changing the default compilation of existing constructs
        3.10.6 Changing Compilation of Types
        3.10.7 Subclassing Guidelines
        3.10.8 Further Examples
            UTC timestamp function
            GREATEST function
            false expression
    3.11 Expression Serializer Extension
    3.12 Deprecated Event Interfaces
        3.12.1 Execution, Connection and Cursor Events
        3.12.2 Connection Pool Events
    3.13 Core Exceptions
    3.14 Core Internals

4 Dialects
    4.1 Drizzle
        4.1.1 Supported Versions and Features
        4.1.2 Connecting
        4.1.3 Connection Timeouts
        4.1.4 Storage Engines
        4.1.5 Keys
        4.1.6 Drizzle SQL Extensions
        4.1.7 Drizzle Data Types
        4.1.8 MySQL-Python Notes
            Connecting
            Character Sets
            Known Issues
    4.2 Firebird
        4.2.1 Dialects
        4.2.2 Locking Behavior
        4.2.3 RETURNING support
        4.2.4 kinterbasdb
    4.3 Informix
        4.3.1 informixdb Notes
            Connecting
    4.4 MaxDB
        4.4.1 Overview
        4.4.2 Connecting
        4.4.3 Implementation Notes
            sapdb.dbapi
    4.5 Microsoft Access
    4.6 Microsoft SQL Server
        4.6.1 Connecting
        4.6.2 Auto Increment Behavior
        4.6.3 Collation Support
        4.6.4 LIMIT/OFFSET Support
        4.6.5 Nullability
        4.6.6 Date / Time Handling
        4.6.7 Compatibility Levels
        4.6.8 Triggers
        4.6.9 Enabling Snapshot Isolation
        4.6.10 Scalar Select Comparisons
        4.6.11 Known Issues
        4.6.12 SQL Server Data Types
        4.6.13 PyODBC
            Connecting
        4.6.14 mxODBC
            Connecting
            Execution Modes
        4.6.15 pymssql
            Connecting
            Limitations
        4.6.16 zxjdbc Notes
            JDBC Driver
            Connecting
        4.6.17 AdoDBAPI
    4.7 MySQL
        4.7.1 Supported Versions and Features
        4.7.2 Connecting
        4.7.3 Connection Timeouts
        4.7.4 Storage Engines
        4.7.5 Case Sensitivity and Table Reflection
        4.7.6 Keys
        4.7.7 SQL Mode
        4.7.8 MySQL SQL Extensions
        4.7.9 CAST Support
        4.7.10 MySQL Specific Index Options
            Index Length
        4.7.11 MySQL Data Types
        4.7.12 MySQL-Python Notes
            Connecting
            Character Sets
            Known Issues
        4.7.13 OurSQL Notes
            Connecting
            Character Sets
        4.7.14 pymysql Notes
            Connecting
            MySQL-Python Compatibility
        4.7.15 MySQL-Connector Notes
            Connecting
        4.7.16 pyodbc Notes
            Connecting
            Limitations
        4.7.17 zxjdbc Notes
            JDBC Driver
            Connecting
            Character Sets
    4.8 Oracle
        4.8.1 Connect Arguments
        4.8.2 Auto Increment Behavior
        4.8.3 Identifier Casing
        4.8.4 Unicode
        4.8.5 LIMIT/OFFSET Support
        4.8.6 ON UPDATE CASCADE
        4.8.7 Oracle 8 Compatibility
        4.8.8 Synonym/DBLINK Reflection
        4.8.9 Oracle Data Types
        4.8.10 cx_Oracle Notes
            Driver
            Connecting
            Unicode
            LOB Objects
            Two Phase Transaction Support
            Precision Numerics
        4.8.11 zxjdbc Notes
            JDBC Driver
    4.9 PostgreSQL
        4.9.1 Sequences/SERIAL
        4.9.2 Transaction Isolation Level
        4.9.3 Remote / Cross-Schema Table Introspection
        4.9.4 INSERT/UPDATE...RETURNING
        4.9.5 Postgresql-Specific Index Options
            Partial Indexes
            Operator Classes
            Index Types
        4.9.6 PostgreSQL Data Types
        4.9.7 psycopg2 Notes
            Driver
            Connecting
            Per-Statement/Connection Execution Options
            Unicode
            Transactions
            Client Encoding
            Transaction Isolation Level
            NOTICE logging
        4.9.8 py-postgresql Notes
            Connecting
        4.9.9 pg8000 Notes
            Connecting
            Unicode
            Interval
        4.9.10 zxjdbc Notes
            JDBC Driver
    4.10 SQLite
        4.10.1 Date and Time Types
        4.10.2 Auto Incrementing Behavior
        4.10.3 Transaction Isolation Level
        4.10.4 SQLite Data Types
        4.10.5 Pysqlite
            Driver
            Connect Strings
            Compatibility with sqlite3 native date and datetime types
            Threading/Pooling Behavior
            Unicode
    4.11 Sybase
        4.11.1 python-sybase notes
            Unicode Support
        4.11.2 pyodbc notes
            Unicode Support
        4.11.3 mxodbc notes

5 Indices and tables
Full table of contents. For a high level overview of all documentation, see the top-level index.
CHAPTER ONE
OVERVIEW
1.1 Overview
The SQLAlchemy SQL Toolkit and Object Relational Mapper is a comprehensive set of tools for working with databases and Python. It has several distinct areas of functionality which can be used individually or combined together. Its major components are illustrated below, with component dependencies organized into layers:

[Figure: layer diagram of SQLAlchemy's major components, not reproduced here]
Above, the two most significant front-facing portions of SQLAlchemy are the Object Relational Mapper and the SQL Expression Language. SQL Expressions can be used independently of the ORM. When using the ORM, the SQL Expression language remains part of the public facing API as it is used within object-relational configurations and queries.
pip - pip is an installer that rides on top of setuptools or distribute, replacing the usage of easy_install. It is often preferred for its simpler mode of usage.
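For example, installing SQLAlchemy with pip is a single command (append ==0.7.3 to pin the release this manual documents):

    pip install SQLAlchemy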
CHAPTER TWO
SQLALCHEMY ORM
2.1 Object Relational Tutorial
2.1.1 Introduction
The SQLAlchemy Object Relational Mapper presents a method of associating user-defined Python classes with database tables, and instances of those classes (objects) with rows in their corresponding tables. It includes a system that transparently synchronizes all changes in state between objects and their related rows, called a unit of work, as well as a system for expressing database queries in terms of the user-defined classes and their defined relationships between each other.

The ORM is in contrast to the SQLAlchemy Expression Language, upon which the ORM is constructed. Whereas the SQL Expression Language, introduced in SQL Expression Language Tutorial, presents a system of representing the primitive constructs of the relational database directly without opinion, the ORM presents a high level and abstracted pattern of usage, which itself is an example of applied usage of the Expression Language.

While there is overlap among the usage patterns of the ORM and the Expression Language, the similarities are more superficial than they may at first appear. One approaches the structure and content of data from the perspective of a user-defined domain model which is transparently persisted and refreshed from its underlying storage model. The other approaches it from the perspective of literal schema and SQL expression representations which are explicitly composed into messages consumed individually by the database.

A successful application may be constructed using the Object Relational Mapper exclusively. In advanced situations, an application constructed with the ORM may make occasional usage of the Expression Language directly in certain areas where specific database interactions are required.

The following tutorial is in doctest format, meaning each >>> line represents something you can type at a Python command prompt, and the following text represents the expected return value.
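The tutorial's "Version Check" step (section 2.1.2) is not reproduced here; as a quick sketch of what it verifies, the installed version can be checked at the prompt (the +SKIP directive simply excludes the line from doctest comparison):

>>> import sqlalchemy
>>> sqlalchemy.__version__  # doctest: +SKIP
'0.7.3'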
2.1.3 Connecting
For this tutorial we will use an in-memory-only SQLite database. To connect we use create_engine():
>>> from sqlalchemy import create_engine
>>> engine = create_engine('sqlite:///:memory:', echo=True)

The echo flag is a shortcut to setting up SQLAlchemy logging, which is accomplished via Python's standard logging module. With it enabled, we'll see all the generated SQL produced. If you are working through this tutorial and want less output generated, set it to False. This tutorial will format the SQL behind a popup window so it doesn't get in our way; just click the "SQL" links to see what's being generated.

The return value of create_engine() is an instance of Engine, and it represents the core interface to the database, adapted through a dialect that handles the details of the database and DBAPI in use. In this case the SQLite dialect will interpret instructions to the Python built-in sqlite3 module.

The Engine has not actually tried to connect to the database yet; that happens only the first time it is asked to perform a task against the database. We can illustrate this by asking it to perform a simple SELECT statement:

>>> engine.execute("select 1").scalar()
select 1
()
1

As the Engine.execute() method is called, the Engine establishes a connection to the SQLite database, which is then used to emit the SQL. The connection is then returned to an internal connection pool where it will be reused on subsequent statement executions. While we illustrate direct usage of the Engine here, this isn't typically necessary when using the ORM, where the Engine, once created, is used behind the scenes by the ORM as we'll see shortly.
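Connection URLs for other backends follow the same general form. As an illustrative sketch (these exact connection strings are hypothetical examples, not part of the tutorial text), a file-based SQLite database or a PostgreSQL database would be selected purely by the URL handed to create_engine():

>>> file_engine = create_engine('sqlite:///tutorial.db')  # a file named tutorial.db (hypothetical)
>>> pg_engine = create_engine('postgresql://scott:tiger@localhost/test')  # dialect://user:password@host/dbname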
Classes mapped using the Declarative system are defined in terms of a base class, produced with the declarative_base() function, which we then subclass to declare our User mapping:

>>> from sqlalchemy.ext.declarative import declarative_base
>>> Base = declarative_base()
>>> from sqlalchemy import Column, Integer, String
>>> class User(Base):
...     __tablename__ = 'users'
...
...     id = Column(Integer, primary_key=True)
...     name = Column(String)
...     fullname = Column(String)
...     password = Column(String)
...
...     def __init__(self, name, fullname, password):
...         self.name = name
...         self.fullname = fullname
...         self.password = password
...
...     def __repr__(self):
...         return "<User('%s','%s', '%s')>" % (self.name, self.fullname, self.password)
The above User class establishes details about the table being mapped, including the name of the table denoted by the __tablename__ attribute, a set of columns id, name, fullname and password, where the id column will also be the primary key of the table. While it's certainly possible that some database tables don't have primary key columns (as is also the case with views, which can also be mapped), the ORM, in order to actually map to a particular table, needs there to be at least one column denoted as a primary key column; multiple-column, i.e. composite, primary keys are of course entirely feasible as well.

We define a constructor via __init__() and also a __repr__() method - both are optional. The class of course can have any number of other methods and attributes as required by the application, as it's basically just a plain Python class. Inheriting from Base is also only a requirement of the declarative configurational system, which itself is optional and relatively open ended; at its core, the SQLAlchemy ORM only requires that a class be a so-called new style class, that is, it inherits from object in Python 2, in order to be mapped. All classes in Python 3 are new style classes.

The Non Opinionated Philosophy

In our User mapping example, it was required that we identify the name of the table in use, as well as the names and characteristics of all columns which we care about, including which column or columns represent the primary key, as well as some basic information about the types in use. SQLAlchemy never makes assumptions about these decisions - the developer must always be explicit about specific conventions in use. However, that doesn't mean the task can't be automated. While this tutorial will keep things explicit, developers are encouraged to make use of helper functions as well as Declarative Mixins to automate their tasks in large scale applications. The section Mixin and Custom Base Classes introduces many of these techniques.

With our User class constructed via the Declarative system, we have defined information about our table, known as table metadata, as well as a user-defined class which is linked to this table, known as a mapped class. Declarative has provided for us a shorthand system for what in SQLAlchemy is called a Classical Mapping, which specifies these two units separately and is discussed in Classical Mappings. The table is actually represented by a datastructure known as Table, and the mapping represented by a mapper object generated by a function called mapper(). Declarative performs both of these steps for us, making available the Table it has created via the __table__ attribute:

>>> User.__table__
Table('users', MetaData(None),
    Column('id', Integer(), table=<users>, primary_key=True, nullable=False),
    Column('name', String(), table=<users>),
    Column('fullname', String(), table=<users>),
    Column('password', String(), table=<users>), schema=None)

and while rarely needed, making available the mapper() object via the __mapper__ attribute:

>>> User.__mapper__
<Mapper at 0x...; User>

The Declarative base class also contains a catalog of all the Table objects that have been defined, called MetaData, available via the .metadata attribute. In this example, we are defining new tables that have yet to be created in our SQLite database, so one helpful feature the MetaData object offers is the ability to issue CREATE TABLE statements to the database for all tables that don't yet exist.
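As an aside, this catalog can be inspected directly; a small sketch, using the sorted_tables accessor, which returns Table objects in dependency order (at this point only the users table has been declared):

>>> for table in Base.metadata.sorted_tables:
...     print table.name
users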
We illustrate this by calling the MetaData.create_all() method, passing in our Engine as a source of database connectivity. We will see that special commands are first emitted to check for the presence of the users table, and following that the actual CREATE TABLE statement:

>>> Base.metadata.create_all(engine)
PRAGMA table_info("users")
()
CREATE TABLE users (
    id INTEGER NOT NULL,
    name VARCHAR,
    fullname VARCHAR,
    password VARCHAR,
    PRIMARY KEY (id)
)
()
COMMIT

Minimal Table Descriptions vs. Full Descriptions

Users familiar with the syntax of CREATE TABLE may notice that the VARCHAR columns were generated without a length; on SQLite and Postgresql, this is a valid datatype, but on others, it's not allowed. So if running this tutorial on one of those databases, and you wish to use SQLAlchemy to issue CREATE TABLE, a length may be provided to the String type as below:

Column(String(50))

The length field on String, as well as similar precision/scale fields available on Integer, Numeric, etc. are not referenced by SQLAlchemy other than when creating tables.

Additionally, Firebird and Oracle require sequences to generate new primary key identifiers, and SQLAlchemy doesn't generate or assume these without being instructed. For that, you use the Sequence construct:

from sqlalchemy import Sequence
Column(Integer, Sequence('user_id_seq'), primary_key=True)

A full, foolproof Table generated via our declarative mapping is therefore:

class User(Base):
    __tablename__ = 'users'

    id = Column(Integer, Sequence('user_id_seq'), primary_key=True)
    name = Column(String(50))
    fullname = Column(String(50))
    password = Column(String(12))

    def __init__(self, name, fullname, password):
        self.name = name
        self.fullname = fullname
        self.password = password

    def __repr__(self):
        return "<User('%s','%s', '%s')>" % (self.name, self.fullname, self.password)

We include this more verbose table definition separately to highlight the difference between a minimal construct geared primarily towards in-Python usage only, versus one that will be used to emit CREATE TABLE statements on a particular set of backends with more stringent requirements.
>>> ed_user.password
'edspassword'
>>> str(ed_user.id)
'None'

The id attribute, which while not defined by our __init__() method, exists with a value of None on our User instance due to the id column we declared in our mapping. By default, the ORM creates class attributes for all columns present in the table being mapped. These class attributes exist as Python descriptors, and define instrumentation for the mapped class. The functionality of this instrumentation includes the ability to fire on change events, track modifications, and to automatically load new data from the database when needed.

Since we have not yet told SQLAlchemy to persist Ed Jones within the database, its id is None. When we persist the object later, this attribute will be populated with a newly generated value.

The default __init__() method

Note that in our User example we supplied an __init__() method, which receives name, fullname and password as positional arguments. The Declarative system supplies for us a default constructor if one is not already present, which accepts keyword arguments of the same name as that of the mapped attributes. Below we define User without specifying a constructor:

class User(Base):
    __tablename__ = 'users'

    id = Column(Integer, primary_key=True)
    name = Column(String)
    fullname = Column(String)
    password = Column(String)

Our User class above will make usage of the default constructor, and provide id, name, fullname, and password as keyword arguments:

u1 = User(name='ed', fullname='Ed Jones', password='foobar')
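For context, the Session class discussed next is produced by the sessionmaker() factory, bound to our Engine - a minimal sketch of that step, which the tutorial performs at this point:

>>> from sqlalchemy.orm import sessionmaker
>>> Session = sessionmaker(bind=engine)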
This custom-made Session class will create new Session objects which are bound to our database. Other transactional characteristics may be defined when calling sessionmaker() as well; these are described in a later chapter. Then, whenever you need to have a conversation with the database, you instantiate a Session:

>>> session = Session()

The above Session is associated with our SQLite-enabled Engine, but it hasn't opened any connections yet. When it's first used, it retrieves a connection from a pool of connections maintained by the Engine, and holds onto it until we commit all changes and/or close the session object.
Session Creational Patterns

The business of acquiring a Session has a good deal of variety based on the variety of types of applications and frameworks out there. Keep in mind the Session is just a workspace for your objects, local to a particular database connection - if you think of an application thread as a guest at a dinner party, the Session is the guest's plate and the objects it holds are the food (and the database... the kitchen?)! Hints on how Session is integrated into an application are at Session Frequently Asked Questions.
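One widely used integration pattern, sketched here for illustration: scoped_session provides a registry of thread-local Session objects, so each request-handling thread gets its own workspace:

from sqlalchemy.orm import scoped_session, sessionmaker

# each thread calling Session() receives its own Session instance
Session = scoped_session(sessionmaker(bind=engine))

# at the end of a request, discard the current thread's session
Session.remove()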
Also, Ed has already decided his password isn't too secure, so let's change it:

>>> ed_user.password = 'f8s7ccs'

The Session is paying attention. It knows, for example, that Ed Jones has been modified:

>>> session.dirty
IdentitySet([<User('ed','Ed Jones', 'f8s7ccs')>])

and that three new User objects are pending:

>>> session.new
IdentitySet([<User('wendy','Wendy Williams', 'foobar')>,
<User('mary','Mary Contrary', 'xxg527')>,
<User('fred','Fred Flinstone', 'blah')>])

We tell the Session that we'd like to issue all remaining changes to the database and commit the transaction, which has been in progress throughout. We do this via commit():

>>> session.commit()
UPDATE users SET password=? WHERE users.id = ?
('f8s7ccs', 1)
INSERT INTO users (name, fullname, password) VALUES (?, ?, ?)
('wendy', 'Wendy Williams', 'foobar')
INSERT INTO users (name, fullname, password) VALUES (?, ?, ?)
('mary', 'Mary Contrary', 'xxg527')
INSERT INTO users (name, fullname, password) VALUES (?, ?, ?)
('fred', 'Fred Flinstone', 'blah')
COMMIT
commit() flushes whatever remaining changes remain to the database, and commits the transaction. The connection resources referenced by the session are now returned to the connection pool. Subsequent operations with this session will occur in a new transaction, which will again re-acquire connection resources when first needed.

If we look at Ed's id attribute, which earlier was None, it now has a value:

>>> ed_user.id
BEGIN (implicit)
SELECT users.id AS users_id,
    users.name AS users_name,
    users.fullname AS users_fullname,
    users.password AS users_password
FROM users
WHERE users.id = ?
(1,)
1

After the Session inserts new rows in the database, all newly generated identifiers and database-generated defaults become available on the instance, either immediately or via load-on-first-access. In this case, the entire row was reloaded on access because a new transaction was begun after we issued commit(). SQLAlchemy by default refreshes data from a previous transaction the first time it's accessed within a new transaction, so that the most recent state is available. The level of reloading is configurable as is described in Using the Session.
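If you want newly generated identifiers without ending the transaction, flush() alone will emit the pending SQL while keeping the transaction open. A small sketch (the extra user here is hypothetical, not part of the tutorial's data):

>>> session.add(User('al', 'Al Smith', 'alspassword'))  # hypothetical extra user
>>> session.flush()   # emits the INSERT; generated ids are now populated
>>> # the transaction remains open; commit() or rollback() decides its fate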
Session Object States

As our User object moved from being outside the Session, to inside the Session without a primary key, to actually being inserted, it moved between three out of four available object states - transient, pending, and persistent. Being aware of these states and what they mean is always a good idea - be sure to read Quickie Intro to Object States for a quick overview.
>>> ed_user.name
BEGIN (implicit)
SELECT users.id AS users_id,
    users.name AS users_name,
    users.fullname AS users_fullname,
    users.password AS users_password
FROM users
WHERE users.id = ?
(1,)
u'ed'
>>> fake_user in session
False

Issuing a SELECT illustrates the changes made to the database:
>>> session.query(User).filter(User.name.in_(['ed', 'fakeuser'])).all()
SELECT users.id AS users_id,
    users.name AS users_name,
    users.fullname AS users_fullname,
    users.password AS users_password
FROM users
WHERE users.name IN (?, ?)
('ed', 'fakeuser')
[<User('ed','Ed Jones', 'f8s7ccs')>]
2.1.9 Querying
A Query object is created using the query() method on Session. This function takes a variable number of arguments, which can be any combination of classes and class-instrumented descriptors. Below, we indicate a Query which loads User instances. When evaluated in an iterative context, the list of User objects present is returned:

>>> for instance in session.query(User).order_by(User.id):
...     print instance.name, instance.fullname
SELECT users.id AS users_id,
    users.name AS users_name,
    users.fullname AS users_fullname,
    users.password AS users_password
FROM users ORDER BY users.id
()
ed Ed Jones
wendy Wendy Williams
mary Mary Contrary
fred Fred Flinstone

The Query also accepts ORM-instrumented descriptors as arguments. Any time multiple class entities or column-based entities are expressed as arguments to the query() function, the return result is expressed as tuples:

>>> for name, fullname in session.query(User.name, User.fullname):
...     print name, fullname
SELECT users.name AS users_name,
    users.fullname AS users_fullname
FROM users
()
ed Ed Jones
wendy Wendy Williams
mary Mary Contrary
fred Fred Flinstone

The tuples returned by Query are named tuples, and can be treated much like an ordinary Python object. The names are the same as the attribute's name for an attribute, and the class name for a class:

>>> for row in session.query(User, User.name).all():
...     print row.User, row.name
SELECT users.id AS users_id,
    users.name AS users_name,
    users.fullname AS users_fullname,
    users.password AS users_password
FROM users
()
<User('ed','Ed Jones', 'f8s7ccs')> ed
<User('wendy','Wendy Williams', 'foobar')> wendy
<User('mary','Mary Contrary', 'xxg527')> mary
<User('fred','Fred Flinstone', 'blah')> fred

You can control the names of individual column expressions using the label() construct, which is available from any ColumnElement-derived object, as well as any class attribute which is mapped to one (such as User.name):

>>> for row in session.query(User.name.label('name_label')).all():
...     print(row.name_label)
SELECT users.name AS name_label
FROM users
()
ed
wendy
mary
fred

The name given to a full entity such as User, assuming that multiple entities are present in the call to query(), can be controlled using aliased:

>>> from sqlalchemy.orm import aliased
>>> user_alias = aliased(User, name='user_alias')
>>> for row in session.query(user_alias, user_alias.name).all():
...     print row.user_alias
SELECT user_alias.id AS user_alias_id,
    user_alias.name AS user_alias_name,
    user_alias.fullname AS user_alias_fullname,
    user_alias.password AS user_alias_password
FROM users AS user_alias
()
<User('ed','Ed Jones', 'f8s7ccs')>
<User('wendy','Wendy Williams', 'foobar')>
<User('mary','Mary Contrary', 'xxg527')>
<User('fred','Fred Flinstone', 'blah')>

Basic operations with Query include issuing LIMIT and OFFSET, most conveniently using Python array slices and typically in conjunction with ORDER BY:

>>> for u in session.query(User).order_by(User.id)[1:3]:
...     print u
SELECT users.id AS users_id,
    users.name AS users_name,
    users.fullname AS users_fullname,
    users.password AS users_password
FROM users ORDER BY users.id
LIMIT ? OFFSET ?
(2, 1)
<User('wendy','Wendy Williams', 'foobar')>
<User('mary','Mary Contrary', 'xxg527')>

and filtering results, which is accomplished either with filter_by(), which uses keyword arguments:

>>> for name, in session.query(User.name).\
...     filter_by(fullname='Ed Jones'):
...     print name
SELECT users.name AS users_name
FROM users
WHERE users.fullname = ?
('Ed Jones',)
ed

...or filter(), which uses more flexible SQL expression language constructs. These allow you to use regular Python operators with the class-level attributes on your mapped class:

>>> for name, in session.query(User.name).\
...     filter(User.fullname=='Ed Jones'):
...     print name
SELECT users.name AS users_name
FROM users
WHERE users.fullname = ?
('Ed Jones',)
ed

The Query object is fully generative, meaning that most method calls return a new Query object upon which further criteria may be added. For example, to query for users named ed with a full name of Ed Jones, you can call filter() twice, which joins criteria using AND:

>>> for user in session.query(User).\
...     filter(User.name=='ed').\
...     filter(User.fullname=='Ed Jones'):
...     print user
SELECT users.id AS users_id,
    users.name AS users_name,
    users.fullname AS users_fullname,
    users.password AS users_password
FROM users
WHERE users.name = ? AND users.fullname = ?
('ed', 'Ed Jones')
<User('ed','Ed Jones', 'f8s7ccs')>

Common Filter Operators

Here's a rundown of some of the most common operators used in filter():

equals:

query.filter(User.name == 'ed')

not equals:

query.filter(User.name != 'ed')

LIKE:

query.filter(User.name.like('%ed%'))

IN:

query.filter(User.name.in_(['ed', 'wendy', 'jack']))

# works with query objects too:
query.filter(User.name.in_(session.query(User.name).filter(User.name.like('%ed%'))))

NOT IN:

query.filter(~User.name.in_(['ed', 'wendy', 'jack']))

IS NULL:
filter(User.name == None)

IS NOT NULL:

filter(User.name != None)

AND:

from sqlalchemy import and_
filter(and_(User.name == 'ed', User.fullname == 'Ed Jones'))

# or call filter()/filter_by() multiple times
filter(User.name == 'ed').filter(User.fullname == 'Ed Jones')

OR:

from sqlalchemy import or_
filter(or_(User.name == 'ed', User.name == 'wendy'))

match:

query.filter(User.name.match('wendy'))

The contents of the match parameter are database backend specific.

Returning Lists and Scalars

The all(), one(), and first() methods of Query immediately issue SQL and return a non-iterator value. all() returns a list:

>>> query = session.query(User).filter(User.name.like('%ed')).order_by(User.id)
>>> query.all()
SELECT users.id AS users_id,
    users.name AS users_name,
    users.fullname AS users_fullname,
    users.password AS users_password
FROM users
WHERE users.name LIKE ? ORDER BY users.id
('%ed',)
[<User('ed','Ed Jones', 'f8s7ccs')>, <User('fred','Fred Flinstone', 'blah')>]

first() applies a limit of one and returns the first result as a scalar:

>>> query.first()
SELECT users.id AS users_id,
    users.name AS users_name,
    users.fullname AS users_fullname,
    users.password AS users_password
FROM users
WHERE users.name LIKE ? ORDER BY users.id
LIMIT ? OFFSET ?
('%ed', 1, 0)
<User('ed','Ed Jones', 'f8s7ccs')>

one() fully fetches all rows, and if not exactly one object identity or composite row is present in the result, raises an error:

>>> from sqlalchemy.orm.exc import MultipleResultsFound
>>> try:
...     user = query.one()
... except MultipleResultsFound, e:
...     print e
SELECT users.id AS users_id,
    users.name AS users_name,
    users.fullname AS users_fullname,
    users.password AS users_password
FROM users
WHERE users.name LIKE ? ORDER BY users.id
('%ed',)
Multiple rows were found for one()

>>> from sqlalchemy.orm.exc import NoResultFound
>>> try:
...     user = query.filter(User.id == 99).one()
... except NoResultFound, e:
...     print e
SELECT users.id AS users_id,
    users.name AS users_name,
    users.fullname AS users_fullname,
    users.password AS users_password
FROM users
WHERE users.name LIKE ? AND users.id = ? ORDER BY users.id
('%ed', 99)
No row was found for one()

Using Literal SQL

Literal strings can be used flexibly with Query. Most methods accept strings in addition to SQLAlchemy clause constructs. For example, filter() and order_by():

>>> for user in session.query(User).\
...     filter("id<224").\
...     order_by("id").all():
...     print user.name
SELECT users.id AS users_id,
    users.name AS users_name,
    users.fullname AS users_fullname,
    users.password AS users_password
FROM users
WHERE id<224 ORDER BY id
()
ed
wendy
mary
fred

Bind parameters can be specified with string-based SQL, using a colon. To specify the values, use the params() method:

>>> session.query(User).filter("id<:value and name=:name").\
...     params(value=224, name='fred').order_by(User.id).one()
SELECT users.id AS users_id,
    users.name AS users_name,
    users.fullname AS users_fullname,
    users.password AS users_password
FROM users
WHERE id<? and name=? ORDER BY users.id
(224, 'fred')
<User('fred','Fred Flinstone', 'blah')>

To use an entirely string-based statement, use from_statement(); just ensure that the columns clause of the statement contains the column names normally used by the mapper (below illustrated using an asterisk):

>>> session.query(User).from_statement(
...     "SELECT * FROM users where name=:name").\
...     params(name='ed').all()
SELECT * FROM users where name=?
('ed',)
[<User('ed','Ed Jones', 'f8s7ccs')>]

You can use from_statement() to go completely raw, using string names to identify desired columns:

>>> session.query("id", "name", "thenumber12").\
...     from_statement("SELECT id, name, 12 as "
...         "thenumber12 FROM users where name=:name").\
...     params(name='ed').all()
SELECT id, name, 12 as thenumber12 FROM users where name=?
('ed',)
[(1, u'ed', 12)]
Query is constructed like the rest of SQLAlchemy, in that it tries to always allow falling back to a less automated, lower level approach to things. Accepting strings for all SQL fragments is a big part of that, so that you can bypass the need to organize SQL constructs if you know specifically what string output you'd like. But when using literal strings, the Query no longer knows anything about that part of the SQL construct being emitted, and has no ability to transform it to adapt to new contexts.

For example, suppose we selected User objects and ordered by the name column, using a string to indicate name:

>>> q = session.query(User.id, User.name)
>>> q.order_by("name").all()
SELECT users.id AS users_id, users.name AS users_name
FROM users ORDER BY name
()
[(1, u'ed'), (4, u'fred'), (3, u'mary'), (2, u'wendy')]

Perfectly fine. But suppose, before we got a hold of the Query, some sophisticated transformations were applied to it, such as below where we use from_self(), a particularly advanced method, to retrieve pairs of user names with different numbers of characters:

>>> from sqlalchemy import func
>>> ua = aliased(User)
>>> q = q.from_self(User.id, User.name, ua.name).\
...     filter(User.name < ua.name).\
...     filter(func.length(ua.name) != func.length(User.name))

The Query now represents a select from a subquery, where User is represented twice both inside and outside of the subquery. Telling the Query to order by name doesn't really give us much guarantee which name it's going to order on. In this case it assumes name is against the outer aliased User construct:

>>> q.order_by("name").all()
SELECT anon_1.users_id AS anon_1_users_id,
    anon_1.users_name AS anon_1_users_name,
    users_1.name AS users_1_name
FROM (SELECT users.id AS users_id, users.name AS users_name
    FROM users) AS anon_1, users AS users_1
WHERE anon_1.users_name < users_1.name
    AND length(users_1.name) != length(anon_1.users_name)
ORDER BY name
()
[(1, u'ed', u'fred'), (1, u'ed', u'mary'), (1, u'ed', u'wendy'), (3, u'mary', u'wendy'), ...]

Only if we use the SQL element directly, in this case User.name or ua.name, do we give Query enough information to know for sure which name we'd like to order on, where we can see we get different results for each:

>>> q.order_by(ua.name).all()
SELECT anon_1.users_id AS anon_1_users_id,
    anon_1.users_name AS anon_1_users_name,
    users_1.name AS users_1_name
FROM (SELECT users.id AS users_id, users.name AS users_name
    FROM users) AS anon_1, users AS users_1
WHERE anon_1.users_name < users_1.name
    AND length(users_1.name) != length(anon_1.users_name)
ORDER BY users_1.name
()
[(1, u'ed', u'fred'), (1, u'ed', u'mary'), (1, u'ed', u'wendy'), (3, u'mary', u'wendy'), ...]

>>> q.order_by(User.name).all()
SELECT anon_1.users_id AS anon_1_users_id,
    anon_1.users_name AS anon_1_users_name,
    users_1.name AS users_1_name
FROM (SELECT users.id AS users_id, users.name AS users_name
    FROM users) AS anon_1, users AS users_1
WHERE anon_1.users_name < users_1.name
    AND length(users_1.name) != length(anon_1.users_name)
ORDER BY anon_1.users_name
Counting

Query includes a convenience method for counting called count():

>>> session.query(User).filter(User.name.like('%ed')).count()
SELECT count(*) AS count_1
FROM (SELECT users.id AS users_id,
    users.name AS users_name,
    users.fullname AS users_fullname,
    users.password AS users_password
FROM users
WHERE users.name LIKE ?) AS anon_1
('%ed',)
2

The count() method is used to determine how many rows the SQL statement would return. Looking at the generated SQL above, SQLAlchemy always places whatever it is we are querying into a subquery, then counts the rows from that. In some cases this can be reduced to a simpler SELECT count(*) FROM table, however modern versions of SQLAlchemy don't try to guess when this is appropriate, as the exact SQL can be emitted using more explicit means.

For situations where the thing to be counted needs to be indicated specifically, we can specify the count function directly using the expression func.count(), available from the func construct. Below we use it to return the count of each distinct user name:

>>> from sqlalchemy import func
>>> session.query(func.count(User.name), User.name).group_by(User.name).all()
SELECT count(users.name) AS count_1, users.name AS users_name
FROM users GROUP BY users.name
()
[(1, u'ed'), (1, u'fred'), (1, u'mary'), (1, u'wendy')]

To achieve our simple SELECT count(*) FROM table, we can apply it as:

>>> session.query(func.count('*')).select_from(User).scalar()
SELECT count(?) AS count_1
FROM users
('*',)
4

The usage of select_from() can be removed if we express the count in terms of the User primary key directly:

>>> session.query(func.count(User.id)).scalar()
SELECT count(users.id) AS count_1
FROM users
()
4
>>> class Address(Base):
...     __tablename__ = 'addresses'
...     id = Column(Integer, primary_key=True)
...     email_address = Column(String, nullable=False)
...     user_id = Column(Integer, ForeignKey('users.id'))
...
...     user = relationship("User", backref=backref('addresses', order_by=id))
...
...     def __init__(self, email_address):
...         self.email_address = email_address
...
...     def __repr__(self):
...         return "<Address('%s')>" % self.email_address

The above class introduces the ForeignKey construct, which is a directive applied to Column that indicates that values in this column should be constrained to be values present in the named remote column. This is a core feature of relational databases, and is the glue that transforms an otherwise unconnected collection of tables to have rich overlapping relationships. The ForeignKey above expresses that values in the addresses.user_id column should be constrained to those values in the users.id column, i.e. its primary key.

A second directive, known as relationship(), tells the ORM that the Address class itself should be linked to the User class, using the attribute Address.user. relationship() uses the foreign key relationships between the two tables to determine the nature of this linkage, determining that Address.user will be many-to-one. A subdirective of relationship() called backref() is placed inside of relationship(), providing details about the relationship as expressed in reverse, that of a collection of Address objects on User referenced by User.addresses. The reverse side of a many-to-one relationship is always one-to-many. A full catalog of available relationship() configurations is at Basic Relational Patterns.

The two complementing relationships Address.user and User.addresses are referred to as a bidirectional relationship, and this is a key feature of the SQLAlchemy ORM. The section Linking Relationships with Backref discusses the backref feature in detail.

Arguments to relationship() which concern the remote class can be specified using strings, assuming the Declarative system is in use. Once all mappings are complete, these strings are evaluated as Python expressions in order to produce the actual argument, in the above case the User class. The names which are allowed during this evaluation include, among other things, the names of all classes which have been created in terms of the declared base. Below we illustrate creation of the same addresses/user bidirectional relationship in terms of User instead of Address:

class User(Base):
    # ....
    addresses = relationship("Address", order_by="Address.id", backref="user")

See the docstring for relationship() for more detail on argument style.

Did you know?

- a FOREIGN KEY constraint in most (though not all) relational databases can only link to a primary key column, or a column that has a UNIQUE constraint.
- a FOREIGN KEY constraint that refers to a multiple column primary key, and itself has multiple columns, is known as a composite foreign key. It can also reference a subset of those columns (see the sketch after this list).
- FOREIGN KEY columns can automatically update themselves, in response to a change in the referenced column or row. This is known as the CASCADE referential action, and is a built in function of the relational database.
- FOREIGN KEY can refer to its own table. This is referred to as a self-referential foreign key.

Read more about foreign keys at Foreign Key - Wikipedia.
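A composite foreign key in SQLAlchemy is expressed with the table-level ForeignKeyConstraint construct rather than a per-column ForeignKey. A minimal sketch, using hypothetical invoice/invoice_item tables that are not part of this tutorial's schema:

from sqlalchemy import Table, Column, Integer, String, ForeignKeyConstraint

invoice = Table('invoice', Base.metadata,
    Column('invoice_id', Integer, primary_key=True),
    Column('ref_num', Integer, primary_key=True),
    Column('description', String(60))
)

invoice_item = Table('invoice_item', Base.metadata,
    Column('item_id', Integer, primary_key=True),
    Column('item_name', String(60)),
    Column('invoice_id', Integer),
    Column('ref_num', Integer),
    # both columns together reference the two-column primary key above
    ForeignKeyConstraint(['invoice_id', 'ref_num'],
                         ['invoice.invoice_id', 'invoice.ref_num'])
)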
We'll need to create the addresses table in the database, so we will issue another CREATE from our metadata, which will skip over tables which have already been created:

>>> Base.metadata.create_all(engine)
PRAGMA table_info("users")
()
PRAGMA table_info("addresses")
()
CREATE TABLE addresses (
    id INTEGER NOT NULL,
    email_address VARCHAR NOT NULL,
    user_id INTEGER,
    PRIMARY KEY (id),
    FOREIGN KEY(user_id) REFERENCES users (id)
)
()
COMMIT
>>> jack = session.query(User).\
...     filter_by(name='jack').one()
BEGIN (implicit)
SELECT users.id AS users_id,
    users.name AS users_name,
    users.fullname AS users_fullname,
    users.password AS users_password
FROM users
WHERE users.name = ?
('jack',)
>>> jack
<User('jack','Jack Bean', 'gjffdd')>

Let's look at the addresses collection. Watch the SQL:

>>> jack.addresses
SELECT addresses.id AS addresses_id,
    addresses.email_address AS addresses_email_address,
    addresses.user_id AS addresses_user_id
FROM addresses
WHERE ? = addresses.user_id ORDER BY addresses.id
(5,)
[<Address('[email protected]')>, <Address('[email protected]')>]

When we accessed the addresses collection, SQL was suddenly issued. This is an example of a lazy loading relationship. The addresses collection is now loaded and behaves just like an ordinary list. We'll cover ways to optimize the loading of this collection in a bit.
The actual SQL JOIN syntax, on the other hand, is most easily achieved using the Query.join() method:

>>> session.query(User).join(Address).\
...     filter(Address.email_address=='[email protected]').\
...     all()
SELECT users.id AS users_id,
    users.name AS users_name,
    users.fullname AS users_fullname,
    users.password AS users_password
FROM users JOIN addresses ON users.id = addresses.user_id
WHERE addresses.email_address = ?
('[email protected]',)
[<User('jack','Jack Bean', 'gjffdd')>]

Query.join() knows how to join between User and Address because there's only one foreign key between them. If there were no foreign keys, or several, Query.join() works better when one of the following forms are used:

query.join(Address, User.id==Address.user_id)  # explicit condition
query.join(User.addresses)                     # specify relationship from left to right
query.join(Address, User.addresses)            # same, with explicit target
query.join('addresses')                        # same, using a string
As you would expect, the same idea is used for outer joins, using the outerjoin() function:

query.outerjoin(User.addresses)  # LEFT OUTER JOIN
The reference documentation for join() contains detailed information and examples of the calling styles accepted by this method; join() is an important method at the center of usage for any SQL-fluent application.

Using Aliases

When querying across multiple tables, if the same table needs to be referenced more than once, SQL typically requires that the table be aliased with another name, so that it can be distinguished against other occurrences of that table. The Query supports this most explicitly using the aliased construct. Below we join to the Address entity twice, to locate a user who has two distinct email addresses at the same time:

>>> from sqlalchemy.orm import aliased
>>> adalias1 = aliased(Address)
>>> adalias2 = aliased(Address)
>>> for username, email1, email2 in \
...     session.query(User.name, adalias1.email_address, adalias2.email_address).\
...     join(adalias1, User.addresses).\
...     join(adalias2, User.addresses).\
...     filter(adalias1.email_address=='[email protected]').\
...     filter(adalias2.email_address=='[email protected]'):
...     print username, email1, email2
SELECT users.name AS users_name,
    addresses_1.email_address AS addresses_1_email_address,
    addresses_2.email_address AS addresses_2_email_address
FROM users JOIN addresses AS addresses_1 ON users.id = addresses_1.user_id
JOIN addresses AS addresses_2 ON users.id = addresses_2.user_id
WHERE addresses_1.email_address = ? AND addresses_2.email_address = ?
('[email protected]', '[email protected]')
jack [email protected] [email protected]

Using Subqueries

The Query is suitable for generating statements which can be used as subqueries. Suppose we wanted to load User objects along with a count of how many Address records each user has. The best way to generate SQL like this is to get the count of addresses grouped by user ids, and JOIN to the parent. In this case we use a LEFT OUTER JOIN so that we get rows back for those users who don't have any addresses, e.g.:

SELECT users.*, adr_count.address_count
FROM users
LEFT OUTER JOIN (SELECT user_id, count(*) AS address_count
    FROM addresses GROUP BY user_id) AS adr_count
ON users.id = adr_count.user_id

Using the Query, we build a statement like this from the inside out. The statement accessor returns a SQL expression representing the statement generated by a particular Query - this is an instance of a select() construct, which are described in SQL Expression Language Tutorial:

>>> from sqlalchemy.sql import func
>>> stmt = session.query(Address.user_id, func.count('*').\
...     label('address_count')).\
...     group_by(Address.user_id).subquery()

The func keyword generates SQL functions, and the subquery() method on Query produces a SQL expression construct representing a SELECT statement embedded within an alias (it's actually shorthand for query.statement.alias()).

Once we have our statement, it behaves like a Table construct, such as the one we created for users at the start of this tutorial. The columns on the statement are accessible through an attribute called c:

>>> for u, count in session.query(User, stmt.c.address_count).\
...     outerjoin(stmt, User.id==stmt.c.user_id).order_by(User.id):
...     print u, count
SELECT users.id AS users_id,
    users.name AS users_name,
    users.fullname AS users_fullname,
    users.password AS users_password,
    anon_1.address_count AS anon_1_address_count
FROM users LEFT OUTER JOIN
    (SELECT addresses.user_id AS user_id, count(?) AS address_count
    FROM addresses GROUP BY addresses.user_id) AS anon_1
    ON users.id = anon_1.user_id
ORDER BY users.id
('*',)
<User('ed','Ed Jones', 'f8s7ccs')> None
<User('wendy','Wendy Williams', 'foobar')> None
<User('mary','Mary Contrary', 'xxg527')> None
<User('fred','Fred Flinstone', 'blah')> None
<User('jack','Jack Bean', 'gjffdd')> 2

Selecting Entities from Subqueries

Above, we just selected a result that included a column from a subquery. What if we wanted our subquery to map to an entity? For this we use aliased() to associate an alias of a mapped class to a subquery:
>>> stmt = session.query(Address).\
...     filter(Address.email_address != '[email protected]').\
...     subquery()
>>> adalias = aliased(Address, stmt)
>>> for user, address in session.query(User, adalias).\
...     join(adalias, User.addresses):
...     print user, address
SELECT users.id AS users_id,
    users.name AS users_name,
    users.fullname AS users_fullname,
    users.password AS users_password,
    anon_1.id AS anon_1_id,
    anon_1.email_address AS anon_1_email_address,
    anon_1.user_id AS anon_1_user_id
FROM users JOIN
    (SELECT addresses.id AS id,
        addresses.email_address AS email_address,
        addresses.user_id AS user_id
    FROM addresses
    WHERE addresses.email_address != ?) AS anon_1
    ON users.id = anon_1.user_id
('[email protected]',)
<User('jack','Jack Bean', 'gjffdd')> <Address('[email protected]')>

Using EXISTS

The EXISTS keyword in SQL is a boolean operator which returns True if the given expression contains any rows. It may be used in many scenarios in place of joins, and is also useful for locating rows which do not have a corresponding row in a related table.

There is an explicit EXISTS construct, which looks like this:

>>> from sqlalchemy.sql import exists
>>> stmt = exists().where(Address.user_id==User.id)
>>> for name, in session.query(User.name).filter(stmt):
...     print name
SELECT users.name AS users_name
FROM users
WHERE EXISTS (SELECT *
FROM addresses
WHERE addresses.user_id = users.id)
()
jack

The Query features several operators which make usage of EXISTS automatically. Above, the statement can be expressed along the User.addresses relationship using any():

>>> for name, in session.query(User.name).\
...     filter(User.addresses.any()):
...     print name
SELECT users.name AS users_name
FROM users
WHERE EXISTS (SELECT 1
FROM addresses
WHERE users.id = addresses.user_id)
()
jack

any() takes criterion as well, to limit the rows matched:

>>> for name, in session.query(User.name).\
...     filter(User.addresses.any(Address.email_address.like('%google%'))):
...     print name
SELECT users.name AS users_name
FROM users
WHERE EXISTS (SELECT 1
FROM addresses
WHERE users.id = addresses.user_id
AND addresses.email_address LIKE ?)
('%google%',)
jack

has() is the same operator as any() for many-to-one relationships (note the ~ operator here too, which means NOT):

>>> session.query(Address).\
...     filter(~Address.user.has(User.name=='jack')).all()
SELECT addresses.id AS addresses_id,
    addresses.email_address AS addresses_email_address,
    addresses.user_id AS addresses_user_id
FROM addresses
WHERE NOT (EXISTS (SELECT 1
FROM users
WHERE users.id = addresses.user_id
AND users.name = ?))
('jack',)
[]

Common Relationship Operators

Here's all the operators which build on relationships - each one is linked to its API documentation which includes full details on usage and behavior:

__eq__() (many-to-one equals comparison):

query.filter(Address.user == someuser)

__ne__() (many-to-one not equals comparison):

query.filter(Address.user != someuser)

IS NULL (many-to-one comparison, also uses __eq__()):

query.filter(Address.user == None)

contains() (used for one-to-many collections):

query.filter(User.addresses.contains(someaddress))

any() (used for collections):

query.filter(User.addresses.any(Address.email_address == 'bar'))

# also takes keyword arguments:
query.filter(User.addresses.any(email_address='bar'))

has() (used for scalar references):
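A minimal sketch of has(), consistent with the keyword-argument pattern shown for any() above:

query.filter(Address.user.has(name='ed'))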
collection is loaded in one step. We illustrate loading the same addresses collection in this way - note that even though the User.addresses collection on jack is actually populated right now, the query will emit the extra join regardless:

>>> from sqlalchemy.orm import joinedload
>>> jack = session.query(User).\
...     options(joinedload(User.addresses)).\
...     filter_by(name='jack').one()
SELECT users.id AS users_id,
    users.name AS users_name,
    users.fullname AS users_fullname,
    users.password AS users_password,
    addresses_1.id AS addresses_1_id,
    addresses_1.email_address AS addresses_1_email_address,
    addresses_1.user_id AS addresses_1_user_id
FROM users LEFT OUTER JOIN addresses AS addresses_1
    ON users.id = addresses_1.user_id
WHERE users.name = ? ORDER BY addresses_1.id
('jack',)

>>> jack
<User('jack','Jack Bean', 'gjffdd')>

>>> jack.addresses
[<Address('[email protected]')>, <Address('[email protected]')>]

Note that even though the OUTER JOIN resulted in two rows, we still only got one instance of User back. This is because Query applies a uniquing strategy, based on object identity, to the returned entities. This is specifically so that joined eager loading can be applied without affecting the query results.

While joinedload() has been around for a long time, subqueryload() is a newer form of eager loading. subqueryload() tends to be more appropriate for loading related collections while joinedload() tends to be better suited for many-to-one relationships, due to the fact that only one row is loaded for both the lead and the related object.

joinedload() is not a replacement for join()

The join created by joinedload() is anonymously aliased such that it does not affect the query results. A Query.order_by() or Query.filter() call cannot reference these aliased tables - so-called user space joins are constructed using Query.join(). The rationale for this is that joinedload() is only applied in order to affect how related objects or collections are loaded as an optimizing detail - it can be added or removed with no impact on actual results. See the section The Zen of Eager Loading for a detailed description of how this is used.
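For comparison, a sketch of the same load using subqueryload(), which emits a second SELECT for the collection rather than a JOIN (the option is imported from sqlalchemy.orm):

from sqlalchemy.orm import subqueryload

jack = session.query(User).\
    options(subqueryload(User.addresses)).\
    filter_by(name='jack').one()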
Explicit Join + Eagerload

A third style of eager loading is when we are constructing a JOIN explicitly in order to locate the primary rows, and would like to additionally apply the extra table to a related object or collection on the primary object. This feature is supplied via the orm.contains_eager() function, and is most typically useful for pre-loading the many-to-one object on a query that needs to filter on that same object. Below we illustrate loading an Address row as well as the related User object, filtering on the User named jack and using orm.contains_eager() to apply the user columns to the Address.user attribute:
>>> from sqlalchemy.orm import contains_eager
>>> jacks_addresses = session.query(Address).\
...     join(Address.user).\
...     filter(User.name=='jack').\
...     options(contains_eager(Address.user)).\
...     all()
SELECT users.id AS users_id,
    users.name AS users_name,
    users.fullname AS users_fullname,
    users.password AS users_password,
    addresses.id AS addresses_id,
    addresses.email_address AS addresses_email_address,
    addresses.user_id AS addresses_user_id
FROM addresses JOIN users ON users.id = addresses.user_id
WHERE users.name = ?
('jack',)

>>> jacks_addresses
[<Address('[email protected]')>, <Address('[email protected]')>]

>>> jacks_addresses[0].user
<User('jack','Jack Bean', 'gjffdd')>

For more information on eager loading, including how to configure various forms of loading by default, see the section Relationship Loading Techniques.
2.1.14 Deleting
Let's try to delete jack and see how that goes. We'll mark the object as deleted in the session, then we'll issue a count query to see that no rows remain:

>>> session.delete(jack)
>>> session.query(User).filter_by(name='jack').count()
UPDATE addresses SET user_id=? WHERE addresses.id = ?
(None, 1)
UPDATE addresses SET user_id=? WHERE addresses.id = ?
(None, 2)
DELETE FROM users WHERE users.id = ?
(5,)
SELECT count(*) AS count_1
FROM (SELECT users.id AS users_id,
    users.name AS users_name,
    users.fullname AS users_fullname,
    users.password AS users_password
FROM users
WHERE users.name = ?) AS anon_1
('jack',)
0

So far, so good. How about Jack's Address objects?

>>> session.query(Address).filter(
...     Address.email_address.in_(['[email protected]', '[email protected]'])
... ).count()
SELECT count(*) AS count_1
FROM (SELECT addresses.id AS addresses_id,
    addresses.email_address AS addresses_email_address,
    addresses.user_id AS addresses_user_id
FROM addresses
WHERE addresses.email_address IN (?, ?)) AS anon_1
('[email protected]', '[email protected]')
2

Uh oh, they're still there! Analyzing the flush SQL, we can see that the user_id column of each address was set to NULL, but the rows weren't deleted. SQLAlchemy doesn't assume that deletes cascade; you have to tell it to do so.

Configuring delete/delete-orphan Cascade

We will configure cascade options on the User.addresses relationship to change the behavior. While SQLAlchemy allows you to add new attributes and relationships to mappings at any point in time, in this case the existing relationship needs to be removed, so we need to tear down the mappings completely and start again - we'll close the Session:

>>> session.close()

and use a new declarative_base():

>>> Base = declarative_base()

Next we'll declare the User class, adding in the addresses relationship including the cascade configuration (we'll leave the constructor out too):
>>> class User(Base):
...     __tablename__ = 'users'
...
...     id = Column(Integer, primary_key=True)
...     name = Column(String)
...     fullname = Column(String)
...     password = Column(String)
...
...     addresses = relationship("Address", backref='user',
...                     cascade="all, delete, delete-orphan")
...
...     def __repr__(self):
...         return "<User('%s','%s', '%s')>" % (self.name, self.fullname, self.password)

Then we recreate Address, noting that in this case we've created the Address.user relationship via the User class already:

>>> class Address(Base):
...     __tablename__ = 'addresses'
...     id = Column(Integer, primary_key=True)
...     email_address = Column(String, nullable=False)
...     user_id = Column(Integer, ForeignKey('users.id'))
...
...     def __repr__(self):
...         return "<Address('%s')>" % self.email_address

Now when we load Jack (below using get(), which loads by primary key), removing an address from his addresses collection will result in that Address being deleted:

# load Jack by primary key
>>> jack = session.query(User).get(5)
BEGIN (implicit)
SELECT users.id AS users_id,
    users.name AS users_name,
    users.fullname AS users_fullname,
    users.password AS users_password
FROM users
WHERE users.id = ?
(5,)
# remove one Address (lazy load fires off)
>>> del jack.addresses[1]
SELECT addresses.id AS addresses_id,
    addresses.email_address AS addresses_email_address,
    addresses.user_id AS addresses_user_id
FROM addresses
WHERE ? = addresses.user_id
(5,)
# only one address remains
>>> session.query(Address).filter(
...     Address.email_address.in_(['[email protected]', '[email protected]'])
... ).count()
DELETE FROM addresses WHERE addresses.id = ?
(2,)
SELECT count(*) AS count_1
FROM (SELECT addresses.id AS addresses_id,
    addresses.email_address AS addresses_email_address,
    addresses.user_id AS addresses_user_id
FROM addresses
WHERE addresses.email_address IN (?, ?)) AS anon_1
('[email protected]', '[email protected]')
1

Deleting Jack will delete both Jack and his remaining Address:

>>> session.delete(jack)
>>> session.query(User).filter_by(name='jack').count()
DELETE FROM addresses WHERE addresses.id = ?
(1,)
DELETE FROM users WHERE users.id = ?
(5,)
SELECT count(*) AS count_1
FROM (SELECT users.id AS users_id,
    users.name AS users_name,
    users.fullname AS users_fullname,
    users.password AS users_password
FROM users
WHERE users.name = ?) AS anon_1
('jack',)
0

>>> session.query(Address).filter(
...     Address.email_address.in_(['[email protected]', '[email protected]'])
... ).count()
SELECT count(*) AS count_1
FROM (SELECT addresses.id AS addresses_id,
    addresses.email_address AS addresses_email_address,
    addresses.user_id AS addresses_user_id
FROM addresses
WHERE addresses.email_address IN (?, ?)) AS anon_1
('[email protected]', '[email protected]')
0

More on Cascades

Further detail on configuration of cascades is at Cascades. The cascade functionality can also integrate smoothly with the ON DELETE CASCADE functionality of the relational database. See Using Passive Deletes for details.
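Before defining the mapped classes below, the tutorial declares the post_keywords association table directly as a Table; a sketch of that declaration, consistent with the CREATE TABLE post_keywords output shown later in this section:

from sqlalchemy import Table, Column, Integer, ForeignKey

# association table linking posts and keywords
post_keywords = Table('post_keywords', Base.metadata,
    Column('post_id', Integer, ForeignKey('posts.id')),
    Column('keyword_id', Integer, ForeignKey('keywords.id'))
)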
Above, we can see declaring a Table directly is a little different than declaring a mapped class. Table is a constructor function, so each individual Column argument is separated by a comma. The Column object is also given its name explicitly, rather than it being taken from an assigned attribute name.

Next we define BlogPost and Keyword, with a relationship() linked via the post_keywords table:

>>> class BlogPost(Base):
...     __tablename__ = 'posts'
...
...     id = Column(Integer, primary_key=True)
...     user_id = Column(Integer, ForeignKey('users.id'))
...     headline = Column(String(255), nullable=False)
...     body = Column(Text)
...
...     # many to many BlogPost<->Keyword
...     keywords = relationship('Keyword', secondary=post_keywords, backref='posts')
...
...     def __init__(self, headline, body, author):
...         self.author = author
...         self.headline = headline
...         self.body = body
...
...     def __repr__(self):
...         return "BlogPost(%r, %r, %r)" % (self.headline, self.body, self.author)
>>> class Keyword(Base):
...     __tablename__ = 'keywords'
...
...     id = Column(Integer, primary_key=True)
...     keyword = Column(String(50), nullable=False, unique=True)
...
...     def __init__(self, keyword):
...         self.keyword = keyword

Above, the many-to-many relationship is BlogPost.keywords. The defining feature of a many-to-many relationship is the secondary keyword argument which references a Table object representing the association table. This table only contains columns which reference the two sides of the relationship; if it has any other columns, such as its own primary key, or foreign keys to other tables, SQLAlchemy requires a different usage pattern called the association object, described at Association Object.

We would also like our BlogPost class to have an author field. We will add this as another bidirectional relationship, except one issue we'll have is that a single user might have lots of blog posts. When we access User.posts, we'd like to be able to filter results further so as not to load the entire collection. For this we use a setting accepted by relationship() called lazy='dynamic', which configures an alternate loader strategy on the attribute. To use it on the reverse side of a relationship(), we use the backref() function:

>>> from sqlalchemy.orm import backref
>>> # "dynamic" loading relationship to User
>>> BlogPost.author = relationship(User, backref=backref('posts', lazy='dynamic'))

Create new tables:

>>> Base.metadata.create_all(engine)
PRAGMA table_info("users")
()
PRAGMA table_info("addresses")
()
PRAGMA table_info("posts")
()
PRAGMA table_info("keywords")
()
PRAGMA table_info("post_keywords")
()
CREATE TABLE posts (
    id INTEGER NOT NULL,
    user_id INTEGER,
    headline VARCHAR(255) NOT NULL,
    body TEXT,
    PRIMARY KEY (id),
    FOREIGN KEY(user_id) REFERENCES users (id)
)
()
COMMIT
CREATE TABLE keywords (
    id INTEGER NOT NULL,
    keyword VARCHAR(50) NOT NULL,
    PRIMARY KEY (id),
    UNIQUE (keyword)
)
()
COMMIT
CREATE TABLE post_keywords (
    post_id INTEGER,
    keyword_id INTEGER,
    FOREIGN KEY(post_id) REFERENCES posts (id),
    FOREIGN KEY(keyword_id) REFERENCES keywords (id)
)
()
COMMIT

Usage is not too different from what we've been doing. Let's give Wendy some blog posts:

>>> wendy = session.query(User).\
...     filter_by(name='wendy').\
...     one()
SELECT users.id AS users_id,
    users.name AS users_name,
    users.fullname AS users_fullname,
    users.password AS users_password
FROM users
WHERE users.name = ?
('wendy',)
>>> post = BlogPost("Wendy's Blog Post", "This is a test", wendy)
>>> session.add(post)

We're storing keywords uniquely in the database, but we know that we don't have any yet, so we can just create them:

>>> post.keywords.append(Keyword('wendy'))
>>> post.keywords.append(Keyword('firstpost'))

We can now look up all blog posts with the keyword 'firstpost'. We'll use the any operator to locate blog posts where any of its keywords has the keyword string 'firstpost':

>>> session.query(BlogPost).\
...     filter(BlogPost.keywords.any(keyword='firstpost')).\
...     all()
INSERT INTO keywords (keyword) VALUES (?)
('wendy',)
INSERT INTO keywords (keyword) VALUES (?)
('firstpost',)
INSERT INTO posts (user_id, headline, body) VALUES (?, ?, ?)
(2, "Wendy's Blog Post", 'This is a test')
INSERT INTO post_keywords (post_id, keyword_id) VALUES (?, ?)
((1, 1), (1, 2))
SELECT posts.id AS posts_id,
    posts.user_id AS posts_user_id,
    posts.headline AS posts_headline,
    posts.body AS posts_body
FROM posts
WHERE EXISTS (SELECT 1
FROM post_keywords, keywords
WHERE posts.id = post_keywords.post_id
    AND keywords.id = post_keywords.keyword_id
    AND keywords.keyword = ?)
('firstpost',)
[BlogPost("Wendy's Blog Post", 'This is a test', <User('wendy','Wendy Williams', 'foobar')>)]

If we want to look up just Wendy's posts, we can tell the query to narrow down to her as a parent:
>>> session.query(BlogPost).\
...     filter(BlogPost.author==wendy).\
...     filter(BlogPost.keywords.any(keyword='firstpost')).\
...     all()
SELECT posts.id AS posts_id,
    posts.user_id AS posts_user_id,
    posts.headline AS posts_headline,
    posts.body AS posts_body
FROM posts
WHERE ? = posts.user_id AND (EXISTS (SELECT 1
FROM post_keywords, keywords
WHERE posts.id = post_keywords.post_id
    AND keywords.id = post_keywords.keyword_id
    AND keywords.keyword = ?))
(2, 'firstpost')
[BlogPost("Wendy's Blog Post", 'This is a test', <User('wendy','Wendy Williams', 'foobar')>)]

Or we can use Wendy's own posts relationship, which is a "dynamic" relationship, to query straight from there:
>>> wendy.posts.\
...     filter(BlogPost.keywords.any(keyword='firstpost')).\
...     all()
SELECT posts.id AS posts_id,
    posts.user_id AS posts_user_id,
    posts.headline AS posts_headline,
    posts.body AS posts_body
FROM posts
WHERE ? = posts.user_id AND (EXISTS (SELECT 1
FROM post_keywords, keywords
WHERE posts.id = post_keywords.post_id
    AND keywords.id = post_keywords.keyword_id
    AND keywords.keyword = ?))
(2, 'firstpost')
[BlogPost("Wendy's Blog Post", 'This is a test', <User('wendy','Wendy Williams', 'foobar')>)]
mapper(User, user, properties={
    'addresses': relationship(Address, order_by=address.c.id, backref="user")
})
mapper(Address, address)

When the above is complete we now have a Table/mapper() setup the same as that set up using Declarative in the tutorial. Note that the mappings do not have the benefit of the instrumented User and Address classes available, nor is the string argument system of relationship() available, as this is a feature of Declarative. The order_by argument of the User.addresses relationship is defined in terms of the actual address table instead of the Address class.

It's also worth noting that the Classical and Declarative mapping systems are not in any way exclusive of each other. The two can be mixed freely - below we can define a new class Order using a declarative base, which links back to User - no problem, except that we can't specify User as a string since it's not available in the base registry:
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Order(Base):
    __tablename__ = 'order'

    id = Column(Integer, primary_key=True)
    user_id = Column(ForeignKey('user.id'))
    order_number = Column(String(50))
    user = relationship(User, backref="orders")

This reference document uses a mix of Declarative and classical mappings for examples. However, all patterns here apply both to the usage of explicit mapper() and Table objects as well as when using Declarative, where options that are specific to the mapper() function can be specified with Declarative via the __mapper_args__ attribute. Any example in this section which takes a form such as:

mapper(User, user_table, primary_key=[user_table.c.id])

Would translate into declarative as:

class User(Base):
    __table__ = user_table
    __mapper_args__ = {
        'primary_key': [user_table.c.id]
    }

Column objects which are declared inline can also be used directly in __mapper_args__:

class User(Base):
    __tablename__ = 'user'

    id = Column(Integer)

    __mapper_args__ = {
        'primary_key': [id]
    }
In a classical mapping, the Column objects can be placed directly in the properties dictionary using an alternate key:

mapper(User, user_table, properties={
    'id': user_table.c.user_id,
    'name': user_table.c.user_name,
})

When mapping to an already constructed Table, a prefix can be specified using the column_prefix option, which will cause the automated mapping of each Column to name the attribute starting with the given prefix, prepended to the actual Column name:

class User(Base):
    __table__ = user_table
    __mapper_args__ = {'column_prefix': '_'}

The above will place attribute names such as _user_id, _user_name, _password etc. on the mapped User class.

The classical version of the above:

mapper(User, user_table, column_prefix='_')

Mapping Multiple Columns to a Single Attribute

To place multiple columns which are known to be synonymous based on foreign key relationship or join condition into the same mapped attribute, they can be mapped as a list. Below we map to a join():

from sqlalchemy import join, Table, Column, String, Integer, ForeignKey
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

user_table = Table('user', Base.metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String(50)),
    Column('fullname', String(50)),
    Column('password', String(12))
)

address_table = Table('address', Base.metadata,
    Column('id', Integer, primary_key=True),
    Column('user_id', Integer, ForeignKey('user.id')),
    Column('email_address', String(50))
)

# "user JOIN address ON user.id=address.user_id"
useraddress = join(user_table, address_table,
    user_table.c.id == address_table.c.user_id)

class User(Base):
    __table__ = useraddress

    # assign "user.id", "address.user_id" to the
    # "id" attribute
    id = [user_table.c.id, address_table.c.user_id]
    # assign "address.id" to the "address_id"
    # attribute, to avoid name conflicts
    address_id = address_table.c.id

In the above mapping, the value assigned to user.id will also be persisted to the address.user_id column during a flush. The two columns are also not independently queryable from the perspective of the mapped class (they of course are still available from their original tables).

Classical version:

mapper(User, useraddress, properties={
    'id': [user_table.c.id, address_table.c.user_id],
    'address_id': address_table.c.id
})

For further examples on this particular use case, see Mapping a Class against Multiple Tables.

Using column_property for column level options

The mapping of a Column with a particular mapper() can be customized using the orm.column_property() function. This function explicitly creates the ColumnProperty object which handles the job of mapping a Column, instead of relying upon the mapper() function to create it automatically. Used with Declarative, the Column can be embedded directly into the function:

from sqlalchemy.orm import column_property

class User(Base):
    __tablename__ = 'user'

    id = Column(Integer, primary_key=True)
    name = column_property(Column(String(50)), active_history=True)

Or with a classical mapping, in the properties dictionary:

from sqlalchemy.orm import column_property

mapper(User, user, properties={
    'name': column_property(user.c.name, active_history=True)
})

Further examples of orm.column_property() are at SQL Expressions as Mapped Attributes.

sqlalchemy.orm.column_property(*args, **kwargs)
    Provide a column-level property for use with a Mapper.

    Column-based properties can normally be applied to the mapper's properties dictionary using the Column element directly. Use this function when the given column is not directly present within the mapper's selectable; examples include SQL expressions, functions, and scalar SELECT queries.

    Columns that aren't present in the mapper's selectable won't be persisted by the mapper and are effectively read-only attributes.

    Parameters

    *cols - list of Column objects to be mapped.

    active_history=False - When True, indicates that the previous value for a scalar attribute should be loaded when replaced, if not already loaded. Normally, history tracking logic for simple non-primary-key scalar values only needs to be aware of the new value in order to perform a flush. This flag is available for applications that make use of
    attributes.get_history() which also need to know the "previous" value of the attribute. (new in 0.6.6)

    comparator_factory -- a class which extends ColumnProperty.Comparator which provides custom SQL clause generation for comparison operations.

    group -- a group name for this property when marked as deferred.

    deferred -- when True, the column property is "deferred", meaning that it does not load immediately, and is instead loaded when the attribute is first accessed on an instance. See also deferred().

    doc -- optional string that will be applied as the doc on the class-bound descriptor.

    expire_on_flush=True -- Disable expiry on flush. A column_property() which refers to a SQL expression (and not a single table-bound column) is considered to be a "read only" property; populating it has no effect on the state of data, and it can only return database state. For this reason a column_property()'s value is expired whenever the parent object is involved in a flush, that is, has any kind of "dirty" state within a flush. Setting this parameter to False will have the effect of leaving any existing value present after the flush proceeds. Note that the Session with default expiration settings still expires all attributes after a Session.commit() call, however. New in 0.7.3.

    extension -- an AttributeExtension instance, or list of extensions, which will be prepended to the list of attribute listeners for the resulting descriptor placed on the class. Deprecated. Please see AttributeEvents.

Mapping a Subset of Table Columns

To reference a subset of columns referenced by a table as mapped attributes, use the include_properties or exclude_properties arguments. For example:

    mapper(User, user_table, include_properties=['user_id', 'user_name'])

...will map the User class to the user_table table, only including the user_id and user_name columns - the rest are not referenced. Similarly:

    mapper(Address, address_table,
            exclude_properties=['street', 'city', 'state', 'zip'])

...will map the Address class to the address_table table, including all columns present except street, city, state, and zip.

When this mapping is used, the columns that are not included will not be referenced in any SELECT statements emitted by Query, nor will there be any mapped attribute on the mapped class which represents the column; assigning an attribute of that name will have no effect beyond that of a normal Python attribute assignment.

In some cases, multiple columns may have the same name, such as when mapping to a join of two or more tables that share some column name. To exclude or include individual columns, Column objects may also be placed within the include_properties and exclude_properties collections (new feature as of 0.6.4):

    mapper(UserAddress, user_table.join(address_table),
            exclude_properties=[address_table.c.id],
            primary_key=[user_table.c.id]
        )

It should be noted that insert and update defaults configured on individual Column objects, such as those configured by the default, update, server_default and server_onupdate arguments, will continue to function normally even if those Column objects are not mapped. This functionality is part of the SQL expression and execution system and occurs below the level of the ORM.
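These arguments can also be passed through with Declarative via the __mapper_args__ dictionary, in the same way column_prefix was passed earlier. A minimal sketch, with illustrative table and column names not taken from the examples above:

    from sqlalchemy import Table, Column, Integer, String
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    user_table = Table('user', Base.metadata,
        Column('id', Integer, primary_key=True),
        Column('name', String(50)),
        Column('password', String(12))
    )

    class User(Base):
        __table__ = user_table

        # forwarded to the underlying mapper(); the 'password'
        # column is not mapped to any class attribute
        __mapper_args__ = {'include_properties': ['id', 'name']}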
Deferred Column Loading

This feature allows particular columns of a table to be loaded only upon direct access, instead of when the entity is queried. With Declarative, a Column is marked as deferred by wrapping it in orm.deferred(), as in the Book example further below. Classical mappings as always place the usage of orm.deferred() in the properties dictionary against the table-bound Column:

    mapper(Book, book_table, properties={
        'photo': deferred(book_table.c.photo)
    })

Deferred columns can be associated with a "group" name, so that they load together when any of them are first accessed. The example below defines a mapping with a 'photos' deferred group. When one .photo is accessed, all three photos will be loaded in one SELECT statement. The .excerpt will be loaded separately when it is accessed:

    class Book(Base):
        __tablename__ = 'book'

        book_id = Column(Integer, primary_key=True)
        title = Column(String(200), nullable=False)
        summary = Column(String(2000))
        excerpt = deferred(Column(Text))
        photo1 = deferred(Column(Binary), group='photos')
        photo2 = deferred(Column(Binary), group='photos')
        photo3 = deferred(Column(Binary), group='photos')

You can defer or undefer columns at the Query level using the orm.defer() and orm.undefer() query options:

    from sqlalchemy.orm import defer, undefer

    query = session.query(Book)
    query.options(defer('summary')).all()
    query.options(undefer('excerpt')).all()

And an entire "deferred group", i.e. which uses the group keyword argument to orm.deferred(), can be undeferred using orm.undefer_group(), sending in the group name:

    from sqlalchemy.orm import undefer_group

    query = session.query(Book)
    query.options(undefer_group('photos')).all()
Column Deferral API

sqlalchemy.orm.deferred(*columns, **kwargs)
    Return a DeferredColumnProperty, which indicates this object attribute should only be loaded from its corresponding table column when first accessed.

    Used with the properties dictionary sent to mapper().

    See also: Deferred Column Loading

sqlalchemy.orm.defer(*key)
    Return a MapperOption that will convert the column property of the given name into a deferred load.

    Used with Query.options(). e.g.:

        from sqlalchemy.orm import defer

        query(MyClass).options(defer("attribute_one"),
                               defer("attribute_two"))

    A class bound descriptor is also accepted:

        query(MyClass).options(
            defer(MyClass.attribute_one),
            defer(MyClass.attribute_two))

    A "path" can be specified onto a related or collection object using a dotted name. The orm.defer() option will be applied to that object when loaded:

        query(MyClass).options(
            defer("related.attribute_one"),
            defer("related.attribute_two"))

    To specify a path via class, send multiple arguments:

        query(MyClass).options(
            defer(MyClass.related, MyOtherClass.attribute_one),
            defer(MyClass.related, MyOtherClass.attribute_two))

    See also: Deferred Column Loading

    Parameters:

    *key -- A key representing an individual path. Multiple entries are accepted to allow a multiple-token path for a single target, not multiple targets.

sqlalchemy.orm.undefer(*key)
    Return a MapperOption that will convert the column property of the given name into a non-deferred (regular column) load.

    Used with Query.options(). e.g.:
        from sqlalchemy.orm import undefer

        query(MyClass).options(undefer("attribute_one"),
                               undefer("attribute_two"))

    A class bound descriptor is also accepted:

        query(MyClass).options(
            undefer(MyClass.attribute_one),
            undefer(MyClass.attribute_two))

    A "path" can be specified onto a related or collection object using a dotted name. The orm.undefer() option will be applied to that object when loaded:

        query(MyClass).options(
            undefer("related.attribute_one"),
            undefer("related.attribute_two"))

    To specify a path via class, send multiple arguments:

        query(MyClass).options(
            undefer(MyClass.related, MyOtherClass.attribute_one),
            undefer(MyClass.related, MyOtherClass.attribute_two))

    See also: orm.undefer_group() as a means to undefer a group of attributes at once; Deferred Column Loading.

    Parameters:

    *key -- A key representing an individual path. Multiple entries are accepted to allow a multiple-token path for a single target, not multiple targets.

sqlalchemy.orm.undefer_group(name)
    Return a MapperOption that will convert the given group of deferred column properties into a non-deferred (regular column) load.

    Used with Query.options(). e.g.:

        query(MyClass).options(undefer_group("group_one"))

    See also: Deferred Column Loading

    Parameters:

    name -- String name of the deferred group. This name is established using the "group" name passed to the orm.deferred() configurational function.
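Since Query.options() accepts any number of options at once, group undeferral can be combined with individual deferrals in a single query. A short sketch continuing the Book mapping above:

    from sqlalchemy.orm import defer, undefer_group

    # load Book rows with the 'photos' group undeferred and the
    # summary column deferred, all in one SELECT per row set
    books = session.query(Book).options(
        undefer_group('photos'),
        defer('summary')
    ).all()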
SQL Expressions as Mapped Attributes

Attributes may be mapped to arbitrary SQL expressions using orm.column_property(). Below, the fullname attribute is mapped to the concatenation of two columns:

    from sqlalchemy.orm import column_property

    class User(Base):
        __tablename__ = 'user'

        id = Column(Integer, primary_key=True)
        firstname = Column(String(50))
        lastname = Column(String(50))
        fullname = column_property(firstname + " " + lastname)

Correlated subqueries may be used as well. Below we use the select() construct to create a SELECT that links together the count of Address objects available for a particular User:

    from sqlalchemy.orm import column_property
    from sqlalchemy import select, func
    from sqlalchemy import Column, Integer, String, ForeignKey
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Address(Base):
        __tablename__ = 'address'

        id = Column(Integer, primary_key=True)
        user_id = Column(Integer, ForeignKey('user.id'))

    class User(Base):
        __tablename__ = 'user'

        id = Column(Integer, primary_key=True)
        address_count = column_property(
            select([func.count(Address.id)]).\
            where(Address.user_id == id)
        )

If import issues prevent the column_property() from being defined inline with the class, it can be assigned to the class after both are configured. In Declarative this has the effect of calling Mapper.add_property() to add an additional property after the fact:

    User.address_count = column_property(
        select([func.count(Address.id)]).\
        where(Address.user_id == User.id)
    )

For many-to-many relationships, use and_() to join the fields of the association table to both tables in a relation, illustrated here with a classical mapping:

    from sqlalchemy import and_

    mapper(Author, authors, properties={
        'book_count': column_property(
            select([func.count(books.c.id)],
                and_(
                    book_authors.c.author_id == authors.c.id,
                    book_authors.c.book_id == books.c.id
                )))
    })
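The same many-to-many count can be expressed declaratively, following the pattern of the address_count example earlier. A sketch, assuming the books and book_authors Table objects from the classical mapping above are available and that Author maps to an author table whose id participates in the association:

    from sqlalchemy import and_, select, func
    from sqlalchemy.orm import column_property

    class Author(Base):
        __tablename__ = 'author'

        id = Column(Integer, primary_key=True)

        # correlates against this author row via the local id column
        book_count = column_property(
            select([func.count(books.c.id)]).\
            where(and_(
                book_authors.c.author_id == id,
                book_authors.c.book_id == books.c.id
            ))
        )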
Alternatives to column_property()

orm.column_property() is used to provide the effect of a SQL expression that is actively rendered into the SELECT generated for a particular mapped class. For the typical attribute that represents a composed value, it's often simpler and more efficient to just define it as a Python property, which is evaluated as it is invoked on instances after they've been loaded:

    class User(Base):
        __tablename__ = 'user'

        id = Column(Integer, primary_key=True)
        firstname = Column(String(50))
        lastname = Column(String(50))

        @property
        def fullname(self):
            return self.firstname + " " + self.lastname

To emit SQL queries from within a @property, the Session associated with the instance can be acquired using object_session(), which will provide the appropriate transactional context from which to emit a statement:

    from sqlalchemy.orm import object_session
    from sqlalchemy import select, func

    class User(Base):
        __tablename__ = 'user'

        id = Column(Integer, primary_key=True)
        firstname = Column(String(50))
        lastname = Column(String(50))

        @property
        def address_count(self):
            return object_session(self).\
                scalar(
                    select([func.count(Address.id)]).\
                    where(Address.user_id == self.id)
                )

For more information on using descriptors, including how they can be smoothly integrated into SQL expressions, see Using Descriptors.
Simple Validators

A quick way to add a "validation" routine to an attribute is to use the validates() decorator:

    from sqlalchemy.orm import validates

    class EmailAddress(Base):
        __tablename__ = 'address'

        id = Column(Integer, primary_key=True)
        email = Column(String)

        @validates('email')
        def validate_email(self, key, address):
            assert '@' in address
            return address

Validators also receive collection events, when items are added to a collection:

    from sqlalchemy.orm import validates

    class User(Base):
        # ...

        addresses = relationship("Address")

        @validates('addresses')
        def validate_address(self, key, address):
            assert '@' in address.email
            return address

Note that the validates() decorator is a convenience function built on top of attribute events. An application that requires more control over configuration of attribute change behavior can make use of this system, described at AttributeEvents.

sqlalchemy.orm.validates(*names)
    Decorate a method as a "validator" for one or more named properties.

    Designates a method as a validator, a method which receives the name of the attribute as well as a value to be assigned, or in the case of a collection, the value to be added to the collection. The function can then raise validation exceptions to halt the process from continuing (where Python's built-in ValueError and AssertionError exceptions are reasonable choices), or can modify or replace the value before proceeding. The function should otherwise return the given value.

    Note that a validator for a collection cannot issue a load of that collection within the validation routine - this usage raises an assertion to avoid recursion overflows. This is a reentrant condition which is not supported.

Using Descriptors

A more comprehensive way to produce modified behavior for an attribute is to use descriptors. These are commonly used in Python using the property() function. The standard SQLAlchemy technique for descriptors is to create a plain descriptor, and to have it read/write from a mapped attribute with a different name. Below we illustrate this using Python 2.6-style properties:

    class EmailAddress(Base):
        __tablename__ = 'email_address'

        id = Column(Integer, primary_key=True)

        # name the attribute with an underscore,
        # different from the column name
        _email = Column("email", String)

        # then create an ".email" attribute
        # to get/set "._email"
        @property
        def email(self):
            return self._email

        @email.setter
        def email(self, email):
            self._email = email

The approach above will work, but there's more we can add. While our EmailAddress object will shuttle the value through the email descriptor and into the _email mapped attribute, the class level EmailAddress.email attribute does not have the usual expression semantics usable with Query. To provide these, we instead use the hybrid extension as follows:

    from sqlalchemy.ext.hybrid import hybrid_property

    class EmailAddress(Base):
        __tablename__ = 'email_address'

        id = Column(Integer, primary_key=True)

        _email = Column("email", String)

        @hybrid_property
        def email(self):
            return self._email

        @email.setter
        def email(self, email):
            self._email = email

The .email attribute, in addition to providing getter/setter behavior when we have an instance of EmailAddress, also provides a SQL expression when used at the class level, that is, from the EmailAddress class directly:

    from sqlalchemy.orm import Session
    session = Session()

    address = session.query(EmailAddress).\
        filter(EmailAddress.email == '[email protected]').\
        one()

    SELECT address.email AS address_email, address.id AS address_id
    FROM address
    WHERE address.email = ?
    ('[email protected]',)
    address.email = '[email protected]'
    session.commit()

    UPDATE address SET email=? WHERE address.id = ?
    ('[email protected]', 1)
    COMMIT

The hybrid_property also allows us to change the behavior of the attribute, including defining separate behaviors when the attribute is accessed at the instance level versus at the class/expression level, using the hybrid_property.expression() modifier. Such as, if we wanted to add a host name automatically, we might define two sets of string manipulation logic:

    class EmailAddress(Base):
        __tablename__ = 'email_address'
        id = Column(Integer, primary_key=True)

        _email = Column("email", String)

        @hybrid_property
        def email(self):
            """Return the value of _email up until the last twelve
            characters."""

            return self._email[:-12]

        @email.setter
        def email(self, email):
            """Set the value of _email, tacking on the twelve character
            value @example.com."""

            self._email = email + "@example.com"

        @email.expression
        def email(cls):
            """Produce a SQL expression that represents the value
            of the _email column, minus the last twelve characters."""

            return func.substr(cls._email, 0, func.length(cls._email) - 12)

Above, accessing the email property of an instance of EmailAddress will return the value of the _email attribute, removing or adding the hostname @example.com from the value. When we query against the email attribute, a SQL function is rendered which produces the same effect:

    address = session.query(EmailAddress).\
        filter(EmailAddress.email == 'address').\
        one()

    SELECT address.email AS address_email, address.id AS address_id
    FROM address
    WHERE substr(address.email, ?, length(address.email) - ?) = ?
    (0, 12, 'address')

Read more about Hybrids at Hybrid Attributes.

Synonyms

Synonyms are a mapper-level construct that applies expression behavior to a descriptor based attribute. The functionality of synonym is superseded as of 0.7 by hybrid attributes.

sqlalchemy.orm.synonym(name, map_column=False, descriptor=None, comparator_factory=None, doc=None)
    Denote an attribute name as a synonym to a mapped property.

    Note: synonym() is superseded as of 0.7 by the hybrid extension. See the documentation for hybrids at Hybrid Attributes.

    Used with the properties dictionary sent to mapper():

        class MyClass(object):
            def _get_status(self):
                return self._status
            def _set_status(self, value):
                self._status = value
            status = property(_get_status, _set_status)

        mapper(MyClass, sometable, properties={
            "status": synonym("_status", map_column=True)
        })

    Above, the status attribute of MyClass will produce expression behavior against the table column named status, using the Python attribute _status on the mapped class to represent the underlying value.

    Parameters:

    name -- the name of the existing mapped property, which can be any other MapperProperty including column-based properties and relationships.

    map_column -- if True, an additional ColumnProperty is created on the mapper automatically, using the synonym's name as the keyname of the property, and the keyname of this synonym() as the name of the column to map.

Custom Comparators

The expressions returned by comparison operations, such as User.name == 'ed', can be customized by implementing an object that explicitly defines each comparison method needed.

This is a relatively rare use case which generally applies only to highly customized types. Usually, custom SQL behaviors can be associated with a mapped class by composing together the class's existing mapped attributes with other expression components, using either mapped SQL expressions as those described in SQL Expressions as Mapped Attributes, or so-called "hybrid" attributes as described at Hybrid Attributes. Those approaches should be considered first before resorting to custom comparison objects.

Each of orm.column_property(), composite(), relationship(), and comparable_property() accept an argument called comparator_factory. A subclass of PropComparator can be provided for this argument, which can then reimplement basic Python comparison methods such as __eq__(), __ne__(), __lt__(), and so on.

It's best to subclass the PropComparator subclass provided by each type of property. For example, to allow a column-mapped attribute to do case-insensitive comparison:

    from sqlalchemy.orm.properties import ColumnProperty
    from sqlalchemy.sql import func

    class MyComparator(ColumnProperty.Comparator):
        def __eq__(self, other):
            return func.lower(self.__clause_element__()) == func.lower(other)

    mapper(EmailAddress, address_table, properties={
        'email': column_property(address_table.c.email,
                                 comparator_factory=MyComparator)
    })

Above, comparisons on the email column are wrapped in the SQL lower() function to produce case-insensitive matching:

    >>> str(EmailAddress.email == '[email protected]')
    lower(address.email) = lower(:lower_1)

When building a PropComparator, the __clause_element__() method should be used in order to acquire the underlying mapped column. This will return a column that is appropriately wrapped in any kind of subquery or aliasing that has been applied in the context of the generated SQL statement.
sqlalchemy.orm.comparable_property(comparator_factory, descriptor=None)
    Provides a method of applying a PropComparator to any Python descriptor attribute.

    Note: comparable_property() is superseded as of 0.7 by the hybrid extension. See the example at Building Custom Comparators.

    Allows a regular Python @property (descriptor) to be used in queries and SQL constructs like a managed attribute. comparable_property wraps a descriptor with a proxy that directs operator overrides such as == (__eq__) to the supplied comparator but proxies everything else through to the original descriptor.

    Used with the properties dictionary sent to mapper():

        from sqlalchemy.orm import mapper, comparable_property
        from sqlalchemy.orm.interfaces import PropComparator
        from sqlalchemy.sql import func
        from sqlalchemy import Table, MetaData, Integer, String, Column
        metadata = MetaData()

        word_table = Table('word', metadata,
            Column('id', Integer, primary_key=True),
            Column('word', String(200), nullable=False)
        )

        class CaseInsensitiveComparator(PropComparator):
            def __clause_element__(self):
                return self.prop

            def __eq__(self, other):
                return func.lower(self.__clause_element__()) == func.lower(other)

        class SearchWord(object):
            pass

        mapper(SearchWord, word_table, properties={
            'word_insensitive': comparable_property(CaseInsensitiveComparator)
        })

    A mapping like the above allows the word_insensitive attribute to render an expression like:

        >>> print SearchWord.word_insensitive == "Trucks"
        lower(:lower_1) = lower(:lower_2)

    Parameters:

    comparator_factory -- A PropComparator subclass or factory that defines operator behavior for this property.

    descriptor -- Optional when used in a properties={} declaration. The Python descriptor or property to layer comparison behavior on top of. The like-named descriptor will be automatically retrieved from the mapped class if left blank in a properties declaration.
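Composite Column Types

Sets of columns can be associated with a single user-defined datatype, mapped with the composite() function. The examples below map a Vertex class against a vertice_table with columns x1, y1, x2, y2, using a Point composite type for each coordinate pair. composite() expects the composite class to provide a constructor accepting the mapped column values, a __composite_values__() method returning those values in order, and equality methods. A minimal sketch of such a Point class, offered here as an assumption for the examples that follow:

    class Point(object):
        # assumed composite type for the Vertex examples below
        def __init__(self, x, y):
            self.x = x
            self.y = y

        def __composite_values__(self):
            # return the column values in the order they are mapped
            return self.x, self.y

        def __repr__(self):
            return "Point(x=%r, y=%r)" % (self.x, self.y)

        def __eq__(self, other):
            return isinstance(other, Point) and \
                other.x == self.x and \
                other.y == self.y

        def __ne__(self, other):
            return not self.__eq__(other)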
A classical mapping of the Vertex class would define each composite() against the existing vertice table:

    mapper(Vertex, vertice_table, properties={
        'start': composite(Point, vertice_table.c.x1, vertice_table.c.y1),
        'end': composite(Point, vertice_table.c.x2, vertice_table.c.y2),
    })

We can now persist and use Vertex instances, as well as query for them, using the .start and .end attributes against ad-hoc Point instances:

    >>> v = Vertex(start=Point(3, 4), end=Point(5, 6))
    >>> session.add(v)
    >>> q = session.query(Vertex).filter(Vertex.start == Point(3, 4))
    >>> print q.first().start
    BEGIN (implicit)
    INSERT INTO vertice (x1, y1, x2, y2) VALUES (?, ?, ?, ?)
    (3, 4, 5, 6)
    SELECT vertice.id AS vertice_id, vertice.x1 AS vertice_x1,
        vertice.y1 AS vertice_y1, vertice.x2 AS vertice_x2,
        vertice.y2 AS vertice_y2
    FROM vertice
    WHERE vertice.x1 = ? AND vertice.y1 = ?
    LIMIT ? OFFSET ?
    (3, 4, 1, 0)
    Point(x=3, y=4)

sqlalchemy.orm.composite(class_, *cols, **kwargs)
    Return a composite column-based property for use with a Mapper.

    See the mapping documentation section Composite Column Types for a full usage example.

    Parameters:

    class_ -- The composite type class.

    *cols -- List of Column objects to be mapped.

    active_history=False -- When True, indicates that the "previous" value for a scalar attribute should be loaded when replaced, if not already loaded. See the same flag on column_property(). (This flag becomes meaningful specifically for composite() in 0.7 - previously it was a placeholder).

    group -- A group name for this property when marked as deferred.

    deferred -- When True, the column property is "deferred", meaning that it does not load immediately, and is instead loaded when the attribute is first accessed on an instance. See also deferred().

    comparator_factory -- a class which extends CompositeProperty.Comparator which provides custom SQL clause generation for comparison operations.

    doc -- optional string that will be applied as the doc on the class-bound descriptor.

    extension -- an AttributeExtension instance, or list of extensions, which will be prepended to the list of attribute listeners for the resulting descriptor placed on the class. Deprecated. Please see AttributeEvents.
Tracking In-Place Mutations on Composites

As of SQLAlchemy 0.7, in-place changes to an existing composite value are not tracked automatically. Instead, the composite class needs to provide events to its parent object explicitly. This task is largely automated via the usage of the MutableComposite mixin, which uses events to associate each user-defined composite object with all parent associations. Please see the example in Establishing Mutability on Composites.

Redefining Comparison Operations for Composites

The "equals" comparison operation by default produces an AND of all corresponding columns equated to one another. This can be changed using the comparator_factory, described in Custom Comparators. Below we illustrate the "greater than" operator, implementing the same expression that the base "greater than" does:

    from sqlalchemy.orm.properties import CompositeProperty
    from sqlalchemy import sql

    class PointComparator(CompositeProperty.Comparator):
        def __gt__(self, other):
            """redefine the 'greater than' operation"""

            return sql.and_(*[a > b for a, b in
                zip(self.__clause_element__().clauses,
                    other.__composite_values__())])

    class Vertex(Base):
        __tablename__ = 'vertice'

        id = Column(Integer, primary_key=True)
        x1 = Column(Integer)
        y1 = Column(Integer)
        x2 = Column(Integer)
        y2 = Column(Integer)
        start = composite(Point, x1, y1,
                    comparator_factory=PointComparator)
        end = composite(Point, x2, y2,
                    comparator_factory=PointComparator)
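With the comparator installed, a "greater than" filter against either composite attribute renders the column-wise AND of comparisons. A usage sketch, assuming a configured session:

    # renders "vertice.x1 > ? AND vertice.y1 > ?"
    session.query(Vertex).filter(Vertex.start > Point(3, 4)).all()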
Mapping a Class against Multiple Tables

Mappers can be constructed against arbitrary relational units, such as joins, in addition to plain tables. Below, j is assumed to be a join() of user_table and address_table along their user_id columns, and AddressUser a plain class:

    # map to it - the identity of an AddressUser object will be
    # based on (user_id, address_id) since those are the primary keys involved
    mapper(AddressUser, j, properties={
        'user_id': [user_table.c.user_id, address_table.c.user_id]
    })

Note that the list of columns is equivalent to the usage of orm.column_property() with multiple columns:

    from sqlalchemy.orm import mapper, column_property

    mapper(AddressUser, j, properties={
        'user_id': column_property(user_table.c.user_id,
                        address_table.c.user_id)
    })

The usage of orm.column_property() is required when using declarative to map to multiple columns, since the declarative class parser won't recognize a plain list of columns:

    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class AddressUser(Base):
        __table__ = j

        user_id = column_property(user_table.c.user_id,
                        address_table.c.user_id)

A second example:

    from sqlalchemy.sql import join

    # many-to-many join on an association table
    j = join(user_table, userkeywords,
            user_table.c.user_id == userkeywords.c.user_id).join(keywords,
            userkeywords.c.keyword_id == keywords.c.keyword_id)

    # a class
    class KeywordUser(object):
        pass

    # map to it - the identity of a KeywordUser object will be
    # (user_id, keyword_id) since those are the primary keys involved
    mapper(KeywordUser, j, properties={
        'user_id': [user_table.c.user_id, userkeywords.c.user_id],
        'keyword_id': [userkeywords.c.keyword_id, keywords.c.keyword_id]
    })

In both examples above, "composite" columns were added as properties to the mappers; these are aggregations of multiple columns into one mapper property, which instructs the mapper to keep both of those columns set at the same value.
Mapping a Class against Arbitrary Selects

Similar to mapping against a join, a plain select() may be used with a mapper as well. Below, customers and orders are assumed to be existing Table objects:

    subq = select([
        func.count(orders.c.id).label('order_count'),
        func.max(orders.c.price).label('highest_order'),
        orders.c.customer_id
    ]).group_by(orders.c.customer_id).alias()

    s = select([customers, subq]).\
        where(customers.c.customer_id == subq.c.customer_id)

    class Customer(object):
        pass

    mapper(Customer, s)

Above, the customers table is joined against the orders table to produce a full row for each customer row, the total count of related rows in the orders table, and the highest price in the orders table. That query is then mapped against the Customer class. New instances of Customer will contain attributes for each column in the customers table as well as an order_count and highest_order attribute. Updates to the Customer object will only be reflected in the customers table and not the orders table. This is because the primary key columns of the orders table are not represented in this mapper and therefore the table is not affected by save or delete operations.
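A brief usage sketch, assuming a configured session; the aggregate attributes load alongside the plain columns but are read-only:

    for customer in session.query(Customer):
        # order_count and highest_order come from the mapped subquery
        print customer.customer_id, customer.order_count, \
            customer.highest_order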
sqlalchemy.orm.mapper(class_, local_table=None, *args, **params)
    Return a new Mapper object.

    Parameters:

    class_ -- The class to be mapped.

    local_table -- The table to which the class is mapped, or None if this mapper inherits from another mapper using concrete table inheritance.

    always_refresh -- If True, all query operations for this mapped class will overwrite all data within object instances that already exist within the session, erasing any in-memory changes with whatever information was loaded from the database. Usage of this flag is highly discouraged; as an alternative, see the method Query.populate_existing().

    allow_null_pks -- This flag is deprecated - it is stated as allow_partial_pks which defaults to True.

    allow_partial_pks -- Defaults to True. Indicates that a composite primary key with some NULL values should be considered as possibly existing within the database. This affects whether a mapper will assign an incoming row to an existing identity, as well as if Session.merge() will check the database first for a particular primary key value. A "partial primary key" can occur if one has mapped to an OUTER JOIN, for example.

    batch -- Indicates that save operations of multiple entities can be batched together for efficiency. Setting to False indicates that an instance will be fully saved before saving the next instance, which includes inserting/updating all table rows corresponding to the entity as well as calling all MapperExtension methods corresponding to the save operation.

    column_prefix -- A string which will be prepended to the key name of all Column objects when creating column-based properties from the given Table. Does not affect explicitly specified column-based properties.

    concrete -- If True, indicates this mapper should use concrete table inheritance with its parent mapper.

    exclude_properties -- A list or set of string column names to be excluded from mapping. As of SQLAlchemy 0.6.4, this collection may also include Column objects. Columns named or present in this list will not be automatically mapped. Note that neither this option nor include_properties will allow one to circumvent plain Python inheritance - if mapped class B inherits from mapped class A, no combination of includes or excludes will allow B to have fewer properties than its superclass, A.

    extension -- A MapperExtension instance or list of MapperExtension instances which will be applied to all operations by this Mapper. Deprecated. Please see MapperEvents.

    include_properties -- An inclusive list or set of string column names to map. As of SQLAlchemy 0.6.4, this collection may also include Column objects in order to disambiguate between same-named columns in a selectable (such as a join()). If this list is not None, columns present in the mapped table but not named or present in this list will not be automatically mapped. See also exclude_properties.

    inherits -- Another Mapper for which this Mapper will have an inheritance relationship with.

    inherit_condition -- For joined table inheritance, a SQL expression (constructed ClauseElement) which will define how the two tables are joined; defaults to a natural join between the two tables.

    inherit_foreign_keys -- When inherit_condition is used and the condition contains no ForeignKey columns, specify the "foreign" columns of the join condition in this list. Else leave as None.
    non_primary -- Construct a Mapper that will define only the selection of instances, not their persistence. Any number of non_primary mappers may be created for a particular class.

    order_by -- A single Column or list of Column objects for which selection operations should use as the default ordering for entities. Defaults to the OID/ROWID of the table if any, or the first primary key column of the table.

    passive_updates -- Indicates UPDATE behavior of foreign keys when a primary key changes on a joined-table inheritance or other joined table mapping.

        When True, it is assumed that ON UPDATE CASCADE is configured on the foreign key in the database, and that the database will handle propagation of an UPDATE from a source column to dependent rows. Note that with databases which enforce referential integrity (i.e. PostgreSQL, MySQL with InnoDB tables), ON UPDATE CASCADE is required for this operation. The relationship() will update the value of the attribute on related items which are locally present in the session during a flush.

        When False, it is assumed that the database does not enforce referential integrity and will not be issuing its own CASCADE operation for an update. The relationship() will issue the appropriate UPDATE statements to the database in response to the change of a referenced key, and items locally present in the session during a flush will also be refreshed.

        This flag should probably be set to False if primary key changes are expected and the database in use doesn't support CASCADE (i.e. SQLite, MySQL MyISAM tables). Also see the passive_updates flag on relationship(). A future SQLAlchemy release will provide a "detect" feature for this flag.

    polymorphic_on -- Used with mappers in an inheritance relationship, a Column which will identify the class/mapper combination to be used with a particular row. Requires the polymorphic_identity value to be set for all mappers in the inheritance hierarchy. The column specified by polymorphic_on is usually a column that resides directly within the base mapper's mapped table; alternatively, it may be a column that is only present within the <selectable> portion of the with_polymorphic argument.

    polymorphic_identity -- A value which will be stored in the Column denoted by polymorphic_on, corresponding to the class identity of this mapper.

    properties -- A dictionary mapping the string names of object attributes to MapperProperty instances, which define the persistence behavior of that attribute. Note that the columns in the mapped table are automatically converted into ColumnProperty instances based on the key property of each Column (although they can be overridden using this dictionary).

    primary_key -- A list of Column objects which define the primary key to be used against this mapper's selectable unit. This is normally simply the primary key of the local_table, but can be overridden here.

    version_id_col -- A Column which must have an integer type that will be used to keep a running version id of mapped entities in the database. This is used during save operations to ensure that no other thread or process has updated the instance during the lifetime of the entity, else a StaleDataError exception is thrown.

    version_id_generator -- A callable which defines the algorithm used to generate new version ids. Defaults to an integer generator. Can be replaced with one that generates timestamps, uuids, etc. e.g.:

        import uuid
        mapper(Cls, table,
            version_id_col=table.c.version_uuid,
            version_id_generator=lambda version: uuid.uuid4().hex
        )

        The callable receives the current version identifier as its single argument.

    with_polymorphic -- A tuple in the form (<classes>, <selectable>) indicating the default style of "polymorphic" loading, that is, which tables are queried at once. <classes> is any single or list of mappers and/or classes indicating the inherited classes that should be loaded at once. The special value '*' may be used to indicate all descending classes should be loaded immediately. The second tuple argument <selectable> indicates a selectable that will be used to query for multiple classes. Normally, it is left as None, in which case this mapper will form an outer join from the base mapper's table to that of all desired sub-mappers. When specified, it provides the selectable to be used for polymorphic loading. When with_polymorphic includes mappers which load from a "concrete" inheriting table, the <selectable> argument is required, since it usually requires more complex UNION queries.

sqlalchemy.orm.object_mapper(instance)
    Given an object, return the primary Mapper associated with the object instance.

    Raises UnmappedInstanceError if no mapping is configured.

sqlalchemy.orm.class_mapper(class_, compile=True)
    Given a class, return the primary Mapper associated with the key.

    Raises UnmappedClassError if no mapping is configured on the given class, or ArgumentError if a non-class object is passed.

sqlalchemy.orm.compile_mappers()
    Initialize the inter-mapper relationships of all mappers that have been defined.

    Deprecated since version 0.7: compile_mappers() is renamed to configure_mappers()

sqlalchemy.orm.configure_mappers()
    Initialize the inter-mapper relationships of all mappers that have been constructed thus far.

    This function can be called any number of times, but in most cases is handled internally.

sqlalchemy.orm.clear_mappers()
    Remove all mappers from all classes.

    This function removes all instrumentation from classes and disposes of their associated mappers. Once called, the classes are unmapped and can be later re-mapped with new mappers.

    clear_mappers() is not for normal use, as there is literally no valid usage for it outside of very specific testing scenarios. Normally, mappers are permanent structural components of user-defined classes, and are never discarded independently of their class. If a mapped class itself is garbage collected, its mapper is automatically disposed of as well. As such, clear_mappers() is only for usage in test suites that re-use the same classes with different mappings, which is itself an extremely rare use case - the only such use case is in fact SQLAlchemy's own test suite, and possibly the test suites of other ORM extension libraries which intend to test various combinations of mapper construction upon a fixed set of classes.

sqlalchemy.orm.util.identity_key(*args, **kwargs)
    Get an identity key.

    Valid call signatures:

    identity_key(class, ident)
        class -- mapped class (must be a positional argument)

        ident -- primary key, if the key is composite this is a tuple

    identity_key(instance=instance)

        instance -- object instance (must be given as a keyword arg)

    identity_key(class, row=row)

        class -- mapped class (must be a positional argument)

        row -- result proxy row (must be given as a keyword arg)

sqlalchemy.orm.util.polymorphic_union(table_map, typecolname, aliasname='p_union', cast_nulls=True)
    Create a UNION statement used by a polymorphic mapper.

    See Concrete Table Inheritance for an example of how this is used.

    Parameters:

    table_map -- mapping of polymorphic identities to Table objects.

    typecolname -- string name of a "discriminator" column, which will be derived from the query, producing the polymorphic identity for each row. If None, no polymorphic discriminator is generated.

    aliasname -- name of the alias() construct generated.

    cast_nulls -- if True, non-existent columns, which are represented as labeled NULLs, will be passed into CAST. This is a legacy behavior that is problematic on some backends such as Oracle - in which case it can be set to False.

class sqlalchemy.orm.mapper.Mapper(class_, local_table, properties=None, primary_key=None, non_primary=False, inherits=None, inherit_condition=None, inherit_foreign_keys=None, extension=None, order_by=False, always_refresh=False, version_id_col=None, version_id_generator=None, polymorphic_on=None, _polymorphic_map=None, polymorphic_identity=None, concrete=False, with_polymorphic=None, allow_null_pks=None, allow_partial_pks=True, batch=True, column_prefix=None, include_properties=None, exclude_properties=None, passive_updates=True, eager_defaults=False, _compiled_cache_size=100)
    Define the correlation of class attributes to database table columns.

    Instances of this class should be constructed via the mapper() function.

    __init__(class_, local_table, properties=None, primary_key=None, non_primary=False, inherits=None, inherit_condition=None, inherit_foreign_keys=None, extension=None, order_by=False, always_refresh=False, version_id_col=None, version_id_generator=None, polymorphic_on=None, _polymorphic_map=None, polymorphic_identity=None, concrete=False, with_polymorphic=None, allow_null_pks=None, allow_partial_pks=True, batch=True, column_prefix=None, include_properties=None, exclude_properties=None, passive_updates=True, eager_defaults=False, _compiled_cache_size=100)
        Construct a new mapper.

        Mappers are normally constructed via the mapper() function; see mapper() for details.

    add_properties(dict_of_properties)
        Add the given dictionary of properties to this mapper, using add_property.
    add_property(key, prop)
        Add an individual MapperProperty to this mapper.

        If the mapper has not been configured yet, just adds the property to the initial properties dictionary sent to the constructor. If this Mapper has already been configured, then the given MapperProperty is configured immediately.

    base_mapper
        The base-most Mapper in an inheritance chain.

        In a non-inheriting scenario, this attribute will always be this Mapper. In an inheritance scenario, it references the Mapper which is parent to all other Mapper objects in the inheritance chain.

        This is a read only attribute determined during mapper construction. Behavior is undefined if directly modified.

    c
        A synonym for columns.

    cascade_iterator(type_, state, halt_on=None)
        Iterate each element and its mapper in an object graph, for all relationships that meet the given cascade rule.

        Parameters:

        type_ -- The name of the cascade rule (i.e. "save-update", "delete", etc.)

        state -- The lead InstanceState. Child items will be processed per the relationships defined for this object's mapper.

        The return value are object instances; this provides a strong reference so that they don't fall out of scope immediately.

    class_
        The Python class which this Mapper maps.

        This is a read only attribute determined during mapper construction. Behavior is undefined if directly modified.

    class_manager
        The ClassManager which maintains event listeners and class-bound descriptors for this Mapper.

        This is a read only attribute determined during mapper construction. Behavior is undefined if directly modified.

    columns
        A collection of Column or other scalar expression objects maintained by this Mapper.

        The collection behaves the same as that of the c attribute on any Table object, except that only those columns included in this mapping are present, and are keyed based on the attribute name defined in the mapping, not necessarily the key attribute of the Column itself. Additionally, scalar expressions mapped by column_property() are also present here.

        This is a read only attribute determined during mapper construction. Behavior is undefined if directly modified.

    common_parent(other)
        Return true if the given mapper shares a common inherited parent as this mapper.

    compile()
        Initialize the inter-mapper relationships of all mappers that have been constructed thus far.

        Deprecated since version 0.7: Mapper.compile() is replaced by configure_mappers()
    compiled
        Deprecated since version 0.7: Mapper.compiled is replaced by Mapper.configured

    concrete
        Represent True if this Mapper is a concrete inheritance mapper.

        This is a read only attribute determined during mapper construction. Behavior is undefined if directly modified.

    configured
        Represent True if this Mapper has been configured.

        This is a read only attribute determined during mapper construction. Behavior is undefined if directly modified.

        See also configure_mappers().

    get_property(key, _compile_mappers=True)
        Return a MapperProperty associated with the given key.

    get_property_by_column(column)
        Given a Column object, return the MapperProperty which maps this column.

    identity_key_from_instance(instance)
        Return the identity key for the given instance, based on its primary key attributes.

        This value is typically also found on the instance state under the attribute name key.

    identity_key_from_primary_key(primary_key)
        Return an identity-map key for use in storing/retrieving an item from an identity map.

        primary_key -- A list of values indicating the identifier.

    identity_key_from_row(row, adapter=None)
        Return an identity-map key for use in storing/retrieving an item from the identity map.

        row -- A sqlalchemy.engine.base.RowProxy instance or a dictionary corresponding result-set ColumnElement instances to their values within a row.

    inherits
        References the Mapper which this Mapper inherits from, if any.

        This is a read only attribute determined during mapper construction. Behavior is undefined if directly modified.

    isa(other)
        Return True if this mapper inherits from the given mapper.

    iterate_properties
        Return an iterator of all MapperProperty objects.

    local_table
        The Selectable which this Mapper manages.

        Typically is an instance of Table or Alias. May also be None.

        The "local" table is the selectable that the Mapper is directly responsible for managing from an attribute access and flush perspective. For non-inheriting mappers, the local table is the same as the "mapped" table. For joined-table inheritance mappers, local_table will be the particular sub-table of the overall "join" which this Mapper represents. If this mapper is a single-table inheriting mapper, local_table will be None.

        See also mapped_table.
    mapped_table
        The Selectable to which this Mapper is mapped.

        Typically an instance of Table, Join, or Alias.

        The "mapped" table is the selectable that the mapper selects from during queries. For non-inheriting mappers, the mapped table is the same as the "local" table. For joined-table inheritance mappers, mapped_table references the full Join representing full rows for this particular subclass. For single-table inheritance mappers, mapped_table references the base table.

        See also local_table.

    non_primary
        Represent True if this Mapper is a "non-primary" mapper, e.g. a mapper that is used only to select rows but not for persistence management.

        This is a read only attribute determined during mapper construction. Behavior is undefined if directly modified.

    polymorphic_identity
        Represent an identifier which is matched against the polymorphic_on column during result row loading.

        Used only with inheritance, this object can be of any type which is comparable to the type of column represented by polymorphic_on.

        This is a read only attribute determined during mapper construction. Behavior is undefined if directly modified.

    polymorphic_iterator()
        Iterate through the collection including this mapper and all descendant mappers.

        This includes not just the immediately inheriting mappers but all their inheriting mappers as well. To iterate through an entire hierarchy, use mapper.base_mapper.polymorphic_iterator().

    polymorphic_map
        A mapping of "polymorphic identity" identifiers mapped to Mapper instances, within an inheritance scenario.

        The identifiers can be of any type which is comparable to the type of column represented by polymorphic_on.

        An inheritance chain of mappers will all reference the same polymorphic map object. The object is used to correlate incoming result rows to target mappers.

        This is a read only attribute determined during mapper construction. Behavior is undefined if directly modified.

    polymorphic_on
        The Column specified as the polymorphic_on column for this Mapper, within an inheritance scenario.

        This attribute may also be of other types besides Column in a future SQLAlchemy release.

        This is a read only attribute determined during mapper construction. Behavior is undefined if directly modified.

    primary_key
        An iterable containing the collection of Column objects which comprise the "primary key" of the mapped table, from the perspective of this Mapper.
        This list is against the selectable in mapped_table. In the case of inheriting mappers, some columns may be managed by a superclass mapper. For example, in the case of a Join, the primary key is determined by all of the primary key columns across all tables referenced by the Join.

        The list is also not necessarily the same as the primary key column collection associated with the underlying tables; the Mapper features a primary_key argument that can override what the Mapper considers as primary key columns.

        This is a read only attribute determined during mapper construction. Behavior is undefined if directly modified.

    primary_key_from_instance(instance)
        Return the list of primary key values for the given instance.

    primary_mapper()
        Return the primary mapper corresponding to this mapper's class key (class).

    self_and_descendants
        The collection including this mapper and all descendant mappers.

        This includes not just the immediately inheriting mappers but all their inheriting mappers as well.

    single
        Represent True if this Mapper is a single table inheritance mapper.

        local_table will be None if this flag is set.

        This is a read only attribute determined during mapper construction. Behavior is undefined if directly modified.

    tables
        An iterable containing the collection of Table objects which this Mapper is aware of.

        If the mapper is mapped to a Join, or an Alias representing a Select, the individual Table objects that comprise the full construct will be represented here.

        This is a read only attribute determined during mapper construction. Behavior is undefined if directly modified.

    validators
        An immutable dictionary of attributes which have been decorated using the validates() decorator.

        The dictionary contains string attribute names as keys mapped to the actual validation method.
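The attributes above support runtime introspection of a mapping. A brief sketch using the class_mapper() function documented earlier, assuming a previously mapped User class:

    from sqlalchemy.orm import class_mapper

    m = class_mapper(User)

    print m.primary_key        # Column objects comprising the primary key
    print m.tables             # Table objects this mapper is aware of

    for prop in m.iterate_properties:
        print prop.key         # attribute name of each MapperProperty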
Basic Relational Patterns

A quick walkthrough of the basic relational patterns. The imports used for each of the following sections are as follows:

    from sqlalchemy import Table, Column, Integer, ForeignKey
    from sqlalchemy.orm import relationship, backref
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

One To Many

A one to many relationship places a foreign key on the child table referencing the parent. relationship() is then specified on the parent, as referencing a collection of items represented by the child:

    class Parent(Base):
        __tablename__ = 'parent'
        id = Column(Integer, primary_key=True)
        children = relationship("Child")

    class Child(Base):
        __tablename__ = 'child'
        id = Column(Integer, primary_key=True)
        parent_id = Column(Integer, ForeignKey('parent.id'))

To establish a bidirectional relationship in one-to-many, where the "reverse" side is a many to one, specify the backref option:

    class Parent(Base):
        __tablename__ = 'parent'
        id = Column(Integer, primary_key=True)
        children = relationship("Child", backref="parent")

    class Child(Base):
        __tablename__ = 'child'
        id = Column(Integer, primary_key=True)
        parent_id = Column(Integer, ForeignKey('parent.id'))

Child will get a parent attribute with many-to-one semantics.

Many To One

Many to one places a foreign key in the parent table referencing the child. relationship() is declared on the parent, where a new scalar-holding attribute will be created:

    class Parent(Base):
        __tablename__ = 'parent'
        id = Column(Integer, primary_key=True)
        child_id = Column(Integer, ForeignKey('child.id'))
        child = relationship("Child")

    class Child(Base):
        __tablename__ = 'child'
        id = Column(Integer, primary_key=True)

Bidirectional behavior is achieved by specifying backref="parents", which will place a one-to-many collection on the Child class:

    class Parent(Base):
        __tablename__ = 'parent'
        id = Column(Integer, primary_key=True)
        child_id = Column(Integer, ForeignKey('child.id'))
        child = relationship("Child", backref="parents")
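A usage sketch for the bidirectional many-to-one pattern above, assuming a configured Session:

    c = Child()
    p = Parent()
    p.child = c

    # the backref mirrors the assignment in memory immediately,
    # before any flush occurs
    assert p in c.parents

    session.add(p)
    session.commit()   # the flush populates p.child_id from c.id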
One To One

One To One is essentially a bidirectional relationship with a scalar attribute on both sides. To achieve this, the uselist=False flag indicates the placement of a scalar attribute instead of a collection on the "many" side of the relationship. To convert one-to-many into one-to-one:

    class Parent(Base):
        __tablename__ = 'parent'
        id = Column(Integer, primary_key=True)
        child = relationship("Child", uselist=False, backref="parent")

    class Child(Base):
        __tablename__ = 'child'
        id = Column(Integer, primary_key=True)
        parent_id = Column(Integer, ForeignKey('parent.id'))

Or to turn a one-to-many backref into one-to-one, use the backref() function to provide arguments for the reverse side:

    class Parent(Base):
        __tablename__ = 'parent'
        id = Column(Integer, primary_key=True)
        child_id = Column(Integer, ForeignKey('child.id'))
        child = relationship("Child", backref=backref("parent", uselist=False))

    class Child(Base):
        __tablename__ = 'child'
        id = Column(Integer, primary_key=True)

Many To Many

Many to Many adds an association table between two classes. The association table is indicated by the secondary argument to relationship(). Usually, the Table uses the MetaData object associated with the declarative base class, so that the ForeignKey directives can locate the remote tables with which to link:

    association_table = Table('association', Base.metadata,
        Column('left_id', Integer, ForeignKey('left.id')),
        Column('right_id', Integer, ForeignKey('right.id'))
    )

    class Parent(Base):
        __tablename__ = 'left'
        id = Column(Integer, primary_key=True)
        children = relationship("Child", secondary=association_table)

    class Child(Base):
        __tablename__ = 'right'
        id = Column(Integer, primary_key=True)

For a bidirectional relationship, both sides of the relationship contain a collection. The backref keyword will automatically use the same secondary argument for the reverse relationship:

    association_table = Table('association', Base.metadata,
        Column('left_id', Integer, ForeignKey('left.id')),
        Column('right_id', Integer, ForeignKey('right.id'))
    )

    class Parent(Base):
        __tablename__ = 'left'
        id = Column(Integer, primary_key=True)
        children = relationship("Child",
                        secondary=association_table,
                        backref="parents")

    class Child(Base):
        __tablename__ = 'right'
        id = Column(Integer, primary_key=True)

The secondary argument of relationship() also accepts a callable that returns the ultimate argument, which is evaluated only when mappers are first used. Using this, we can define the association_table at a later point, as long as it's available to the callable after all module initialization is complete:

    class Parent(Base):
        __tablename__ = 'left'
        id = Column(Integer, primary_key=True)
        children = relationship("Child",
                        secondary=lambda: association_table,
                        backref="parents")

Association Object

The association object pattern is a variant on many-to-many: it specifically is used when your association table contains additional columns beyond those which are foreign keys to the left and right tables. Instead of using the secondary argument, you map a new class directly to the association table. The left side of the relationship references the association object via one-to-many, and the association class references the right side via many-to-one:

    class Association(Base):
        __tablename__ = 'association'
        left_id = Column(Integer, ForeignKey('left.id'), primary_key=True)
        right_id = Column(Integer, ForeignKey('right.id'), primary_key=True)
        child = relationship("Child")

    class Parent(Base):
        __tablename__ = 'left'
        id = Column(Integer, primary_key=True)
        children = relationship("Association")

    class Child(Base):
        __tablename__ = 'right'
        id = Column(Integer, primary_key=True)

The bidirectional version adds backrefs to both relationships:

    class Association(Base):
        __tablename__ = 'association'
        left_id = Column(Integer, ForeignKey('left.id'), primary_key=True)
        right_id = Column(Integer, ForeignKey('right.id'), primary_key=True)
        child = relationship("Child", backref="parent_assocs")

    class Parent(Base):
        __tablename__ = 'left'
        id = Column(Integer, primary_key=True)
        children = relationship("Association", backref="parent")

    class Child(Base):
        __tablename__ = 'right'
        id = Column(Integer, primary_key=True)

Working with the association pattern in its direct form requires that child objects are associated with an association instance before being appended to the parent; similarly, access from parent to child goes through the association object:

    # create parent, append a child via association
    p = Parent()
    a = Association()
    a.child = Child()
    p.children.append(a)

    # iterate through child objects via association, including association
    # attributes (here, a "data" column assumed to be present on the
    # association table)
    for assoc in p.children:
        print assoc.data
        print assoc.child

To enhance the association object pattern such that direct access to the Association object is optional, SQLAlchemy provides the Association Proxy extension. This extension allows the configuration of attributes which will access two "hops" with a single access, one "hop" to the associated object, and a second to a target attribute.

Note: When using the association object pattern, it is advisable that the association-mapped table not be used as the secondary argument on a relationship() elsewhere, unless that relationship() contains the option viewonly=True. SQLAlchemy otherwise may attempt to emit redundant INSERT and DELETE statements on the same table, if similar state is detected on the related attribute as well as the associated object.
Adjacency List Relationships

The adjacency list pattern is a common relational pattern whereby a table contains a foreign key reference to itself. Sample rows from such a node table:

    id   parent_id
    ---  ---------
    1    NULL
    2    1
    3    1
    4    3
    5    3
    6    1

With a basic mapping, a collection of child Nodes is placed on the parent:

    class Node(Base):
        __tablename__ = 'node'
        id = Column(Integer, primary_key=True)
        parent_id = Column(Integer, ForeignKey('node.id'))
        data = Column(String(50))
        children = relationship("Node")
The relationship() configuration here works in the same way as a "normal" one-to-many relationship, with the exception that the "direction", i.e. whether the relationship is one-to-many or many-to-one, is assumed by default to be one-to-many. To establish the relationship as many-to-one, an extra directive is added known as remote_side, which is a Column or collection of Column objects that indicate those which should be considered to be "remote":

    class Node(Base):
        __tablename__ = 'node'
        id = Column(Integer, primary_key=True)
        parent_id = Column(Integer, ForeignKey('node.id'))
        data = Column(String(50))
        parent = relationship("Node", remote_side=[id])

Where above, the id column is applied as the remote_side of the parent relationship(), thus establishing parent_id as the "local" side, and the relationship then behaves as a many-to-one.

As always, both directions can be combined into a bidirectional relationship using the backref() function:

    class Node(Base):
        __tablename__ = 'node'
        id = Column(Integer, primary_key=True)
        parent_id = Column(Integer, ForeignKey('node.id'))
        data = Column(String(50))
        children = relationship("Node",
                        backref=backref('parent', remote_side=[id])
                    )

There are several examples included with SQLAlchemy illustrating self-referential strategies; these include Adjacency List and XML Persistence.

Self-Referential Query Strategies

Querying of self-referential structures works like any other query:

    # get all nodes named 'child2'
    session.query(Node).filter(Node.data == 'child2')

However extra care is needed when attempting to join along the foreign key from one level of the tree to the next. In SQL, a join from a table to itself requires that at least one side of the expression be "aliased" so that it can be unambiguously referred to.

Recall from Using Aliases in the ORM tutorial that the orm.aliased construct is normally used to provide an "alias" of an ORM entity. Joining from Node to itself using this technique looks like:

    from sqlalchemy.orm import aliased

    nodealias = aliased(Node)
    session.query(Node).filter(Node.data == 'subchild1').\
        join(nodealias, Node.parent).\
        filter(nodealias.data == "child2").\
                all()

SELECT node.id AS node_id,
        node.parent_id AS node_parent_id,
        node.data AS node_data
FROM node JOIN node AS node_1
    ON node.parent_id = node_1.id
WHERE node.data = ? AND node_1.data = ?
['subchild1', 'child2']

Query.join() also includes a feature known as aliased=True that can shorten the verbosity of self-referential joins, at the expense of query flexibility. This feature performs a similar "aliasing" step to that above, without the need for an explicit entity. Calls to Query.filter() and similar subsequent to the aliased join will adapt the Node entity to be that of the alias:

session.query(Node).filter(Node.data=='subchild1').\
                join(Node.parent, aliased=True).\
                filter(Node.data=='child2').\
                all()

SELECT node.id AS node_id,
        node.parent_id AS node_parent_id,
        node.data AS node_data
FROM node JOIN node AS node_1
    ON node_1.id = node.parent_id
WHERE node.data = ? AND node_1.data = ?
['subchild1', 'child2']

To add criterion to multiple points along a longer join, add from_joinpoint=True to the additional join() calls:

# get all nodes named 'subchild1' with a
# parent named 'child2' and a grandparent 'root'
session.query(Node).\
                filter(Node.data=='subchild1').\
                join(Node.parent, aliased=True).\
                filter(Node.data=='child2').\
                join(Node.parent, aliased=True, from_joinpoint=True).\
                filter(Node.data=='root').\
                all()

SELECT node.id AS node_id,
        node.parent_id AS node_parent_id,
        node.data AS node_data
FROM node
    JOIN node AS node_1 ON node_1.id = node.parent_id
    JOIN node AS node_2 ON node_2.id = node_1.parent_id
WHERE node.data = ? AND node_1.data = ? AND node_2.data = ?
['subchild1', 'child2', 'root']

Query.reset_joinpoint() will also remove the "aliasing" from filtering calls:

session.query(Node).\
        join(Node.children, aliased=True).\
        filter(Node.data == 'foo').\
        reset_joinpoint().\
        filter(Node.data == 'bar')
For an example of using aliased=True to arbitrarily join along a chain of self-referential nodes, see XML Persistence.

Configuring Self-Referential Eager Loading

Eager loading of relationships occurs using joins or outerjoins from parent to child table during a normal query operation, such that the parent and its immediate child collection or reference can be populated from a single SQL statement, or a second statement for all immediate child collections. SQLAlchemy's joined and subquery eager loading use aliased tables in all cases when joining to related items, so are compatible with self-referential joining. However, to use eager loading with a self-referential relationship, SQLAlchemy needs to be told how many levels deep it should join and/or query; otherwise the eager load will not take place at all. This depth setting is configured via join_depth:

class Node(Base):
    __tablename__ = 'node'
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey('node.id'))
    data = Column(String(50))
    children = relationship("Node",
                    lazy="joined",
                    join_depth=2)

session.query(Node).all()

SELECT node_1.id AS node_1_id,
        node_1.parent_id AS node_1_parent_id,
        node_1.data AS node_1_data,
        node_2.id AS node_2_id,
        node_2.parent_id AS node_2_parent_id,
        node_2.data AS node_2_data,
        node.id AS node_id,
        node.parent_id AS node_parent_id,
        node.data AS node_data
FROM node
    LEFT OUTER JOIN node AS node_2 ON node.id = node_2.parent_id
    LEFT OUTER JOIN node AS node_1 ON node_2.id = node_1.parent_id
[]
    addresses = relationship("Address", backref="user")

class Address(Base):
    __tablename__ = 'address'
    id = Column(Integer, primary_key=True)
    email = Column(String)
    user_id = Column(Integer, ForeignKey('user.id'))

The above configuration establishes a collection of Address objects on User called User.addresses. It also establishes a .user attribute on Address which will refer to the parent User object. In fact, the backref keyword is only a common shortcut for placing a second relationship onto the Address mapping, including the establishment of an event listener on both sides which will mirror attribute operations in both directions. The above configuration is equivalent to:

from sqlalchemy import Integer, ForeignKey, String, Column
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship

Base = declarative_base()

class User(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    addresses = relationship("Address", back_populates="user")

class Address(Base):
    __tablename__ = 'address'
    id = Column(Integer, primary_key=True)
    email = Column(String)
    user_id = Column(Integer, ForeignKey('user.id'))
    user = relationship("User", back_populates="addresses")

Above, we add a .user relationship to Address explicitly. On both relationships, the back_populates directive tells each relationship about the other one, indicating that they should establish "bidirectional" behavior between each other. The primary effect of this configuration is that the relationship adds event handlers to both attributes which have the behavior of "when an append or set event occurs here, set ourselves onto the incoming attribute using this particular attribute name". The behavior is illustrated as follows. Start with a User and an Address instance. The .addresses collection is empty, and the .user attribute is None:

>>> u1 = User()
>>> a1 = Address()
>>> u1.addresses
[]
>>> print a1.user
None

However, once the Address is appended to the u1.addresses collection, both the collection and the scalar attribute have been populated:

>>> u1.addresses.append(a1)
>>> u1.addresses
[<__main__.Address object at 0x12a6ed0>]
>>> a1.user
<__main__.User object at 0x12a6590>

This behavior of course works in reverse for removal operations, as well as for equivalent operations on both sides. For example, when .user is set again to None, the Address object is removed from the reverse collection:

>>> a1.user = None
>>> u1.addresses
[]

The manipulation of the .addresses collection and the .user attribute occurs entirely in Python without any interaction with the SQL database. Without this behavior, the proper state would be apparent on both sides only once the data has been flushed to the database, and later reloaded after a commit or expiration operation occurs. The backref/back_populates behavior has the advantage that common bidirectional operations can reflect the correct state without requiring a database round trip. Remember, when the backref keyword is used on a single relationship, it's exactly the same as if the above two relationships were created individually using back_populates on each.

Backref Arguments

We've established that the backref keyword is merely a shortcut for building two individual relationship() constructs that refer to each other. Part of the behavior of this shortcut is that certain configurational arguments applied to the relationship() will also be applied to the other direction - namely those arguments that describe the relationship at a schema level, and are unlikely to be different in the reverse direction. The usual case here is a many-to-many relationship() that has a secondary argument, or a one-to-many or many-to-one which has a primaryjoin argument (the primaryjoin argument is discussed in Specifying Alternate Join Conditions to relationship()). For example, if we limited the list of Address objects to those which start with "tony":

from sqlalchemy import Integer, ForeignKey, String, Column
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship

Base = declarative_base()

class User(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    addresses = relationship("Address",
                    primaryjoin="and_(User.id==Address.user_id, "
                        "Address.email.startswith('tony'))",
                    backref="user")

class Address(Base):
    __tablename__ = 'address'
    id = Column(Integer, primary_key=True)
    email = Column(String)
    user_id = Column(Integer, ForeignKey('user.id'))

We can observe, by inspecting the resulting property, that both sides of the relationship have this join condition applied:

>>> print User.addresses.property.primaryjoin
"user".id = address.user_id AND address.email LIKE :email_1 || '%'
>>>
>>> print Address.user.property.primaryjoin
"user".id = address.user_id AND address.email LIKE :email_1 || %% >>> This reuse of arguments should pretty much do the right thing - it uses only arguments that are applicable, and in the case of a many-to-many relationship, will reverse the usage of primaryjoin and secondaryjoin to correspond to the other direction (see the example in Self-Referential Many-to-Many Relationship for this). Its very often the case however that wed like to specify arguments that are specic to just the side where we happened to place the backref. This includes relationship() arguments like lazy, remote_side, cascade and cascade_backrefs. For this case we use the backref() function in place of a string: # <other imports> from sqlalchemy.orm import backref class User(Base): __tablename__ = user id = Column(Integer, primary_key=True) name = Column(String) addresses = relationship("Address", backref=backref("user", lazy="joined")) Where above, we placed a lazy="joined" directive only on the Address.user side, indicating that when a query against Address is made, a join to the User entity should be made automatically which will populate the .user attribute of each returned Address. The backref() function formatted the arguments we gave it into a form that is interpreted by the receiving relationship() as additional arguments to be applied to the new relationship it creates. One Way Backrefs An unusual case is that of the one way backref. This is where the back-populating behavior of the backref is only desirable in one direction. An example of this is a collection which contains a ltering primaryjoin condition. Wed like to append items to this collection as needed, and have them populate the parent object on the incoming object. However, wed also like to have items that are not part of the collection, but still have the same parent association - these items should never be in the collection. Taking our previous example, where we established a primaryjoin that limited the collection only to Address objects whose email address started with the word tony, the usual backref behavior is that all items populate in both directions. We wouldnt want this behavior for a case like the following: >>> u1 = User() >>> a1 = Address(email=mary) >>> a1.user = u1 >>> u1.addresses [<__main__.Address object at 0x1411910>] Above, the Address object that doesnt match the criterion of starts with tony is present in the addresses collection of u1. After these objects are ushed, the transaction committed and their attributes expired for a re-load, the addresses collection will hit the database on next access and no longer have this Address object present, due to the ltering condition. But we can do away with this unwanted side of the backref behavior on the Python side by using two separate relationship() constructs, placing back_populates only on one side: from sqlalchemy import Integer, ForeignKey, String, Column from sqlalchemy.ext.declarative import declarative_base from sqlalchemy.orm import relationship Base = declarative_base()
class User(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    addresses = relationship("Address",
                    primaryjoin="and_(User.id==Address.user_id, "
                        "Address.email.startswith('tony'))",
                    back_populates="user")

class Address(Base):
    __tablename__ = 'address'
    id = Column(Integer, primary_key=True)
    email = Column(String)
    user_id = Column(Integer, ForeignKey('user.id'))
    user = relationship("User")

With the above scenario, appending an Address object to the .addresses collection of a User will always establish the .user attribute on that Address:

>>> u1 = User()
>>> a1 = Address(email='tony')
>>> u1.addresses.append(a1)
>>> a1.user
<__main__.User object at 0x1411850>

However, applying a User to the .user attribute of an Address will not append the Address object to the collection:

>>> a2 = Address(email='mary')
>>> a2.user = u1
>>> a2 in u1.addresses
False

Of course, we've disabled some of the usefulness of backref here, in that when we do append an Address that corresponds to the criteria of email.startswith('tony'), it won't show up in the User.addresses collection until the session is flushed, and the attributes reloaded after a commit or expire operation. While we could consider an attribute event that checks this criterion in Python, this starts to cross the line of duplicating too much SQL behavior in Python. The backref behavior itself is only a slight transgression of this philosophy - SQLAlchemy tries to keep these to a minimum overall.
If you are working with a Table which has no ForeignKey metadata established (which can be the case when using reflected tables with MySQL), or if the join condition cannot be expressed by a simple foreign key relationship, use the primaryjoin, and for many-to-many relationships secondaryjoin, directives to create the appropriate relationship.

In this example, using the User class as well as an Address class which stores a street address, we create a relationship boston_addresses which will only load those Address objects which specify a city of "Boston":

from sqlalchemy import Integer, ForeignKey, String, Column
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship

Base = declarative_base()

class User(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    addresses = relationship("Address",
                    primaryjoin="and_(User.id==Address.user_id, "
                        "Address.city=='Boston')")

class Address(Base):
    __tablename__ = 'address'
    id = Column(Integer, primary_key=True)
    user_id = Column(Integer, ForeignKey('user.id'))
    street = Column(String)
    city = Column(String)
    state = Column(String)
    zip = Column(String)

Note above we specified the primaryjoin argument as a string - this feature is available only when the mapping is constructed using the Declarative extension, and allows us to specify a full SQL expression between two entities before those entities have been fully constructed. When all mappings have been defined, an automatic "mapper configuration" step interprets these string arguments when first needed.

Within this string SQL expression, we also made use of the and_() conjunction construct to establish two distinct predicates for the join condition - joining both the User.id and Address.user_id columns to each other, as well as limiting rows in Address to just city='Boston'. When using Declarative, rudimentary SQL functions like and_() are automatically available in the evaluated namespace of a string relationship() argument.

When using classical mappings, we have the advantage of the Table objects already being present when the mapping is defined, so that the SQL expression can be created immediately:

from sqlalchemy.orm import relationship, mapper

class User(object):
    pass
class Address(object):
    pass
mapper(Address, addresses_table)
mapper(User, users_table, properties={
    'boston_addresses': relationship(Address, primaryjoin=
                and_(users_table.c.id==addresses_table.c.user_id,
                    addresses_table.c.city=='Boston'))
})

Note that the custom criteria we use in a primaryjoin is generally only significant when SQLAlchemy is rendering SQL in order to load or represent this relationship. That is, it's used in the SQL statement that's emitted in order to perform a per-attribute lazy load, or when a join is constructed at query time, such as via Query.join(), or via the eager "joined" or "subquery" styles of loading. When in-memory objects are being manipulated, we can place any Address object we'd like into the boston_addresses collection, regardless of what the value of the .city attribute is. The objects will remain present in the collection until the attribute is expired and re-loaded from the database where the criterion is applied. When a flush occurs, the objects inside of boston_addresses will be flushed unconditionally, assigning the value of the primary key user.id column onto the foreign-key-holding address.user_id column for each row. The city criteria has no effect here, as the flush process only cares about synchronizing primary key values into referencing foreign key values.

Self-Referential Many-to-Many Relationship

Many to many relationships can be customized by one or both of primaryjoin and secondaryjoin. A common situation for custom primary and secondary joins is when establishing a many-to-many relationship from a class to itself, as shown below:

from sqlalchemy import Integer, ForeignKey, String, Column, Table
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship

Base = declarative_base()

node_to_node = Table("node_to_node", Base.metadata,
    Column("left_node_id", Integer, ForeignKey("node.id"), primary_key=True),
    Column("right_node_id", Integer, ForeignKey("node.id"), primary_key=True)
)

class Node(Base):
    __tablename__ = 'node'
    id = Column(Integer, primary_key=True)
    label = Column(String)
    right_nodes = relationship("Node",
                        secondary=node_to_node,
                        primaryjoin=id==node_to_node.c.left_node_id,
                        secondaryjoin=id==node_to_node.c.right_node_id,
                        backref="left_nodes"
    )

Where above, SQLAlchemy can't know automatically which columns should connect to which for the right_nodes and left_nodes relationships. The primaryjoin and secondaryjoin arguments establish how we'd like to join to the association table. In the Declarative form above, as we are declaring these conditions within the Python block that corresponds to the Node class, the id variable is available directly as the Column object we wish to join with. A classical mapping situation here is similar, where node_to_node can be joined to node.c.id:
from sqlalchemy import Integer, ForeignKey, String, Column, Table, MetaData
from sqlalchemy.orm import relationship, mapper

metadata = MetaData()

node_to_node = Table("node_to_node", metadata,
    Column("left_node_id", Integer, ForeignKey("node.id"), primary_key=True),
    Column("right_node_id", Integer, ForeignKey("node.id"), primary_key=True)
)

node = Table("node", metadata,
    Column('id', Integer, primary_key=True),
    Column('label', String)
)

class Node(object):
    pass

mapper(Node, node, properties={
    'right_nodes': relationship(Node,
                        secondary=node_to_node,
                        primaryjoin=node.c.id==node_to_node.c.left_node_id,
                        secondaryjoin=node.c.id==node_to_node.c.right_node_id,
                        backref="left_nodes"
    )})

Note that in both examples, the backref keyword specifies a left_nodes backref - when relationship() creates the second relationship in the reverse direction, it's smart enough to reverse the primaryjoin and secondaryjoin arguments.

Specifying Foreign Keys

When using primaryjoin and secondaryjoin, SQLAlchemy also needs to be aware of which columns in the relationship reference the other. In most cases, a Table construct will have ForeignKey constructs which take care of this; however, in the case of reflected tables on a database that does not report FKs (like MySQL ISAM), or when using join conditions on columns that don't have foreign keys, the relationship() needs to be told specifically which columns are "foreign" using the foreign_keys collection:

mapper(Address, addresses_table)
mapper(User, users_table, properties={
    'addresses': relationship(Address,
                    primaryjoin=users_table.c.user_id==addresses_table.c.user_id,
                    foreign_keys=[addresses_table.c.user_id])
})

Building Query-Enabled Properties

Very ambitious custom join conditions may fail to be directly persistable, and in some cases may not even load correctly. To remove the persistence part of the equation, use the flag viewonly=True on the relationship(), which establishes it as a read-only attribute (data written to the collection will be ignored on flush()). However, in extreme cases, consider using a regular Python property in conjunction with Query as follows:

class User(object):
    def _get_addresses(self):
        return object_session(self).query(Address).with_parent(self).filter(...).all()
    addresses = property(_get_addresses)

Multiple Relationships against the Same Parent/Child

There's no restriction on how many times you can relate from parent to child. SQLAlchemy can usually figure out what you want, particularly if the join conditions are straightforward. Below we add a newyork_addresses attribute to complement the boston_addresses attribute:

mapper(User, users_table, properties={
    'boston_addresses': relationship(Address, primaryjoin=
                and_(users_table.c.user_id==addresses_table.c.user_id,
                    addresses_table.c.city=='Boston')),
    'newyork_addresses': relationship(Address, primaryjoin=
                and_(users_table.c.user_id==addresses_table.c.user_id,
                    addresses_table.c.city=='New York')),
})
Rows that point to themselves / Mutually Dependent Rows

In the first case, a row points to itself. Technically, a database that uses sequences such as PostgreSQL or Oracle can INSERT the row at once using a previously generated value, but databases which rely upon autoincrement-style primary key identifiers cannot. The relationship() always assumes a "parent/child" model of row population during flush, so unless you are populating the primary key/foreign key columns directly, relationship() needs to use two statements.

In the second case, the "widget" row must be inserted before any referring "entry" rows, but then the "favorite_entry_id" column of that "widget" row cannot be set until the "entry" rows have been generated. In this case, it's typically impossible to insert the "widget" and "entry" rows using just two INSERT statements; an UPDATE must be performed in order to keep foreign key constraints fulfilled. The exception is if the foreign keys are configured as "deferred until commit" (a feature some databases support) and if the identifiers were populated manually (again essentially bypassing relationship()).
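The post_update example below refers to widget and entry tables that aren't shown in this excerpt; a minimal sketch of what such a mutually dependent schema might look like, with column names inferred from the mapping that follows:

from sqlalchemy import MetaData, Table, Column, Integer, String, ForeignKey

metadata = MetaData()

# each entry row references its parent widget row
entry = Table('entry', metadata,
    Column('entry_id', Integer, primary_key=True),
    Column('widget_id', Integer, ForeignKey('widget.widget_id')),
    Column('name', String(50))
)

# the widget row in turn references one "favorite" entry row;
# use_alter defers creation of this constraint until both tables exist
widget = Table('widget', metadata,
    Column('widget_id', Integer, primary_key=True),
    Column('favorite_entry_id', Integer,
            ForeignKey('entry.entry_id', use_alter=True, name='fk_favorite_entry')),
    Column('name', String(50))
)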
To enable the UPDATE after INSERT / UPDATE before DELETE behavior on relationship(), use the post_update flag on one of the relationships, preferably the many-to-one side:
mapper(Widget, widget, properties={
    'entries': relationship(Entry,
                    primaryjoin=widget.c.widget_id==entry.c.widget_id),
    'favorite_entry': relationship(Entry,
                    primaryjoin=widget.c.favorite_entry_id==entry.c.entry_id,
                    post_update=True)
})

When a structure using the above mapping is flushed, the "widget" row will be INSERTed minus the "favorite_entry_id" value, then all the "entry" rows will be INSERTed referencing the parent "widget" row, and then an UPDATE statement will populate the "favorite_entry_id" column of the "widget" table (it's one row at a time for the time being).
unit of work searches only through the current identity map for objects that may be referencing the one with a mutating primary key, not throughout the database.
secondary - for a many-to-many relationship, specifies the intermediary table, and is an instance of Table. The secondary keyword argument should generally only be used for a table that is not otherwise expressed in any class mapping, unless this relationship is declared as view only, otherwise conflicting persistence operations can occur. secondary may also be passed as a callable function which is evaluated at mapper initialization time.

active_history=False - when True, indicates that the "previous" value for a many-to-one reference should be loaded when replaced, if not already loaded. Normally, history tracking logic for simple many-to-ones only needs to be aware of the "new" value in order to perform a flush. This flag is available for applications that make use of attributes.get_history() which also need to know the "previous" value of the attribute.

backref - indicates the string name of a property to be placed on the related mapper's class that will handle this relationship in the other direction. The other property will be created automatically when the mappers are configured. Can also be passed as a backref() object to control the configuration of the new relationship.

back_populates - takes a string name and has the same meaning as backref, except the complementing property is not created automatically, and instead must be configured explicitly on the other mapper. The complementing property should also indicate back_populates to this relationship to ensure proper functioning.

cascade - a comma-separated list of cascade rules which determines how Session operations should be "cascaded" from parent to child. This defaults to False, which means the default cascade should be used. The default value is "save-update, merge". Available cascades are:

    save-update - cascade the Session.add() operation. This cascade applies both to future and past calls to add(), meaning new items added to a collection or scalar relationship get placed into the same session as that of the parent, and also applies to items which have been removed from this relationship but are still part of unflushed history.

    merge - cascade the merge() operation

    expunge - cascade the Session.expunge() operation

    delete - cascade the Session.delete() operation

    delete-orphan - if an item of the child's type with no parent is detected, mark it for deletion. Note that this option prevents a pending item of the child's class from being persisted without a parent present.

    refresh-expire - cascade the Session.expire() and refresh() operations

    all - shorthand for "save-update, merge, refresh-expire, expunge, delete"

cascade_backrefs=True - a boolean value indicating if the save-update cascade should operate along an assignment event intercepted by a backref. When set to False, the attribute managed by this relationship will not cascade an incoming transient object into the session of a persistent parent, if the event is received via backref. That is:

mapper(A, a_table, properties={
    'bs': relationship(B, backref="a", cascade_backrefs=False)
})
If an A() is present in the session, assigning it to the "a" attribute on a transient B() will not place the B() into the session. To set the flag in the other direction, i.e. so that A().bs.append(B()) won't add a transient A() into the session for a persistent B():

mapper(A, a_table, properties={
    'bs': relationship(B,
            backref=backref("a", cascade_backrefs=False)
    )
})

cascade_backrefs is new in 0.6.5.

collection_class - a class or callable that returns a new list-holding object, which will be used in place of a plain list for storing elements. Behavior of this attribute is described in detail at Customizing Collection Access.

comparator_factory - a class which extends RelationshipProperty.Comparator which provides custom SQL clause generation for comparison operations.

doc - docstring which will be applied to the resulting descriptor.

extension - an AttributeExtension instance, or list of extensions, which will be prepended to the list of attribute listeners for the resulting descriptor placed on the class. Deprecated. Please see AttributeEvents.

foreign_keys - a list of columns which are to be used as "foreign key" columns. Normally, relationship() uses the ForeignKey and ForeignKeyConstraint objects present within the mapped or secondary Table to determine the "foreign" side of the join condition. This is used to construct SQL clauses in order to load objects, as well as to "synchronize" values from primary key columns to referencing foreign key columns. The foreign_keys parameter overrides the notion of what's "foreign" in the table metadata, allowing the specification of a list of Column objects that should be considered part of the foreign key. There are only two use cases for foreign_keys - one, when it is not convenient for Table metadata to contain its own foreign key metadata (which should be almost never, unless reflecting a large amount of tables from a MySQL MyISAM schema, or a schema that doesn't actually have foreign keys on it). The other is for extremely rare and exotic composite foreign key setups where some columns should artificially not be considered as foreign. foreign_keys may also be passed as a callable function which is evaluated at mapper initialization time, and may be passed as a Python-evaluable string when using Declarative.

innerjoin=False - when True, joined eager loads will use an inner join to join against related tables instead of an outer join (a short sketch follows this list). The purpose of this option is strictly one of performance, as inner joins generally perform better than outer joins. This flag can be set to True when the relationship references an object via many-to-one using local foreign keys that are not nullable, or when the reference is one-to-one or a collection that is guaranteed to have one or at least one entry.

join_depth - when non-None, an integer value indicating how many levels deep "eager" loaders should join on a self-referring or cyclical relationship. The number counts how many times the same Mapper shall be present in the loading condition along a particular join branch. When left at its default of None, eager loaders will stop chaining when they encounter the same target mapper which is already higher up in the chain. This option applies both to joined- and subquery- eager loaders.
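As a brief sketch of the innerjoin flag described above, reusing the User/Address pattern from earlier in this chapter (the class shown is illustrative):

class Address(Base):
    __tablename__ = 'address'
    id = Column(Integer, primary_key=True)
    email = Column(String)

    # the foreign key is NOT NULL, so every Address has a User;
    # the joined eager load may safely use JOIN instead of LEFT OUTER JOIN
    user_id = Column(Integer, ForeignKey('user.id'), nullable=False)
    user = relationship("User", lazy="joined", innerjoin=True)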
lazy='select' - specifies how the related items should be loaded. Default value is 'select'. Values include:

    select - items should be loaded lazily when the property is first accessed, using a separate SELECT statement, or identity map fetch for simple many-to-one references.

    immediate - items should be loaded as the parents are loaded, using a separate SELECT statement, or identity map fetch for simple many-to-one references. (new as of 0.6.5)

    joined - items should be loaded "eagerly" in the same query as that of the parent, using a JOIN or LEFT OUTER JOIN. Whether the join is "outer" or not is determined by the innerjoin parameter.

    subquery - items should be loaded "eagerly" within the same query as that of the parent, using a second SQL statement which issues a JOIN to a subquery of the original statement.

    noload - no loading should occur at any time. This is to support "write-only" attributes, or attributes which are populated in some manner specific to the application.

    dynamic - the attribute will return a pre-configured Query object for all read operations, onto which further filtering operations can be applied before iterating the results. The dynamic collection supports a limited set of mutation operations, allowing append() and remove(). Changes to the collection will not be visible until flushed to the database, where it is then refetched upon iteration.

    True - a synonym for 'select'

    False - a synonym for 'joined'

    None - a synonym for 'noload'

Detailed discussion of loader strategies is at Relationship Loading Techniques.

load_on_pending=False - indicates loading behavior for transient or pending parent objects. When set to True, causes the lazy-loader to issue a query for a parent object that is not persistent, meaning it has never been flushed. This may take effect for a pending object when autoflush is disabled, or for a transient object that has been "attached" to a Session but is not part of its pending collection. Attachment of transient objects to the session without moving to the "pending" state is not a supported behavior at this time. Note that the load of related objects on a pending or transient object also does not trigger any attribute change events - no user-defined events will be emitted for these attributes, and if and when the object is ultimately flushed, only the user-specific foreign key attributes will be part of the modified state. The load_on_pending flag does not improve behavior when the ORM is used normally - object references should be constructed at the object level, not at the foreign key level, so that they are present in an ordinary way before flush() proceeds. This flag is not intended for general use. New in 0.6.5.

order_by - indicates the ordering that should be applied when loading these items. order_by is expected to refer to one of the Column objects to which the target class is mapped, or the attribute itself bound to the target class which refers to the column. order_by may also be passed as a callable function which is evaluated at mapper initialization time, and may be passed as a Python-evaluable string when using Declarative (a brief sketch follows).
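For illustration, a minimal sketch of order_by using the User/Address mapping from earlier in this chapter; the string form relies on the Declarative behavior noted above:

class User(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    name = Column(String)

    # the addresses collection loads ordered by email address
    addresses = relationship("Address", order_by="Address.email")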
passive_deletes=False - indicates loading behavior during delete operations. A value of True indicates that unloaded child items should not be loaded during a delete operation on the parent. Normally, when a parent item is deleted, all child items are loaded so that they can either be marked as deleted, or have their foreign key to the parent set to NULL. Marking this flag as True usually implies an ON DELETE <CASCADE|SET NULL> rule is in place which will handle updating/deleting child rows on the database side. Additionally, setting the flag to the string value 'all' will disable the "nulling out" of the child foreign keys, when there is no delete or delete-orphan cascade enabled. This is typically used when a triggering or error raise scenario is in place on the database side. Note that the foreign key attributes on in-session child objects will not be changed after a flush occurs, so this is a very special use-case setting.

passive_updates=True - indicates loading and INSERT/UPDATE/DELETE behavior when the source of a foreign key value changes (i.e. an "on update" cascade), which are typically the primary key columns of the source row. When True, it is assumed that ON UPDATE CASCADE is configured on the foreign key in the database, and that the database will handle propagation of an UPDATE from a source column to dependent rows. Note that with databases which enforce referential integrity (i.e. PostgreSQL, MySQL with InnoDB tables), ON UPDATE CASCADE is required for this operation. The relationship() will update the value of the attribute on related items which are locally present in the session during a flush. When False, it is assumed that the database does not enforce referential integrity and will not be issuing its own CASCADE operation for an update. The relationship() will issue the appropriate UPDATE statements to the database in response to the change of a referenced key, and items locally present in the session during a flush will also be refreshed. This flag should probably be set to False if primary key changes are expected and the database in use doesn't support CASCADE (i.e. SQLite, MySQL MyISAM tables). Also see the passive_updates flag on mapper(). A future SQLAlchemy release will provide a "detect" feature for this flag.

post_update - this indicates that the relationship should be handled by a second UPDATE statement after an INSERT or before a DELETE. Currently, it also will issue an UPDATE after the instance was UPDATEd as well, although this technically should be improved. This flag is used to handle saving bi-directional dependencies between two individual rows (i.e. each row references the other), where it would otherwise be impossible to INSERT or DELETE both rows fully since one row exists before the other. Use this flag when a particular mapping arrangement will incur two rows that are dependent on each other, such as a table that has a one-to-many relationship to a set of child rows, and also has a column that references a single child row within that list (i.e. both tables contain a foreign key to each other). If a flush() operation returns an error that a "cyclical dependency" was detected, this is a cue that you might want to use post_update to "break" the cycle.

primaryjoin - a SQL expression that will be used as the primary join of this child object against the parent object, or in a many-to-many relationship the join of the primary object to the association table. By default, this value is computed based on the foreign key relationships of the parent and child tables (or association table).
primaryjoin may also be passed as a callable function which is evaluated at mapper initialization time, and may be passed as a Python-evaluable string when using Declarative.

remote_side - used for self-referential relationships, indicates the column or list of columns that form the "remote side" of the relationship.
remote_side may also be passed as a callable function which is evaluated at mapper initialization time, and may be passed as a Python-evaluable string when using Declarative.

query_class - a Query subclass that will be used as the base of the "appender query" returned by a "dynamic" relationship, that is, a relationship that specifies lazy="dynamic" or was otherwise constructed using the orm.dynamic_loader() function.

secondaryjoin - a SQL expression that will be used as the join of an association table to the child object. By default, this value is computed based on the foreign key relationships of the association and child tables. secondaryjoin may also be passed as a callable function which is evaluated at mapper initialization time, and may be passed as a Python-evaluable string when using Declarative.

single_parent=(True|False) - when True, installs a validator which will prevent objects from being associated with more than one parent at a time. This is used for many-to-one or many-to-many relationships that should be treated either as one-to-one or one-to-many. Its usage is optional unless delete-orphan cascade is also set on this relationship(), in which case it's required.

uselist=(True|False) - a boolean that indicates if this property should be loaded as a list or a scalar. In most cases, this value is determined automatically by relationship(), based on the type and direction of the relationship - one to many forms a list, many to one forms a scalar, many to many is a list. If a scalar is desired where normally a list would be present, such as a bi-directional one-to-one relationship, set uselist to False.

viewonly=False - when set to True, the relationship is used only for loading objects within the relationship, and has no effect on the unit-of-work flush process. Relationships with viewonly can specify any kind of join conditions to provide additional views of related objects onto a parent object. Note that the functionality of a viewonly relationship has its limits - complicated join conditions may not compile into eager or lazy loaders properly. If this is the case, use an alternative method.

sqlalchemy.orm.backref(name, **kwargs)
    Create a back reference with explicit keyword arguments, which are the same arguments one can send to relationship(). Used with the backref keyword argument to relationship() in place of a string argument, e.g.:

    'items': relationship(SomeItem, backref=backref('parent', lazy='subquery'))

sqlalchemy.orm.relation(*arg, **kw)
    A synonym for relationship().

sqlalchemy.orm.dynamic_loader(argument, **kw)
    Construct a dynamically-loading mapper property.

    This is essentially the same as using the lazy='dynamic' argument with relationship():

    dynamic_loader(SomeClass)

    # vs.

    relationship(SomeClass, lazy="dynamic")

    A relationship() that is "dynamic" features the behavior that read operations return an active Query object which reads from the database when accessed. Items may be appended to the attribute via append(), or removed via remove(); changes will be persisted to the database during a Session.flush(). However, no other Python list or collection mutation operations are available.
All arguments accepted by relationship() are accepted here, other than lazy, which is fixed at 'dynamic'.
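To illustrate the dynamic loader behavior just described, a brief sketch reusing the User/Address names from earlier; the filter criterion is made up for the example:

class User(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    name = Column(String)

    # read access returns an active Query, not a loaded list
    addresses = relationship("Address", lazy="dynamic")

user = session.query(User).first()

# further filtering can be applied before any rows are fetched
matching = user.addresses.filter(Address.email.like('%@example.com')).all()

# append() and remove() are supported; changes persist at flush time
user.addresses.append(Address(email='new@example.com'))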
Note: The dynamic_loader() function is essentially the same as relationship() with the lazy='dynamic' argument specified.

Setting Noload

The opposite of the dynamic relationship is simply "noload", specified using lazy='noload':

mapper(MyClass, table, properties={
    'children': relationship(MyOtherClass, lazy='noload')
})

Above, the children collection is fully writeable, and changes to it will be persisted to the database as well as locally available for reading at the time they are added. However when instances of MyClass are freshly loaded from the database, the children collection stays empty.

Using Passive Deletes

Use passive_deletes=True to disable child object loading on a DELETE operation, in conjunction with ON DELETE (CASCADE|SET NULL) on your database to automatically cascade deletes to child objects. Note that ON DELETE is not supported on SQLite, and requires InnoDB tables when using MySQL:

mytable = Table('mytable', meta,
    Column('id', Integer, primary_key=True),
)

myothertable = Table('myothertable', meta,
    Column('id', Integer, primary_key=True),
    Column('parent_id', Integer),
    ForeignKeyConstraint(['parent_id'], ['mytable.id'], ondelete="CASCADE"),
)

mapper(MyOtherClass, myothertable)
mapper(MyClass, mytable, properties={
    'children': relationship(MyOtherClass,
                    cascade="all, delete-orphan",
                    passive_deletes=True)
})

When passive_deletes is applied, the children relationship will not be loaded into memory when an instance of MyClass is marked for deletion. The cascade="all, delete-orphan" will take effect for instances of MyOtherClass which are currently present in the session; however for instances of MyOtherClass which are not loaded, SQLAlchemy assumes that "ON DELETE CASCADE" rules will ensure that those rows are deleted by the database and that no foreign key violation will occur.
parent.children.append(Child())
print parent.children[0]

Collections are not limited to lists. Sets, mutable sequences and almost any other Python object that can act as a container can be used in place of the default list, by specifying the collection_class option on relationship():

# use a set
mapper(Parent, properties={
    'children': relationship(Child, collection_class=set)
})

parent = Parent()
child = Child()
parent.children.add(child)
assert child in parent.children

Dictionary Collections

A little extra detail is needed when using a dictionary as a collection. This is because objects are always loaded from the database as lists, and a key-generation strategy must be available to populate the dictionary correctly. The orm.collections.attribute_mapped_collection() function is by far the most common way to achieve a simple dictionary collection. It produces a dictionary class that will apply a particular attribute of the mapped class as a key. Below we map an Item class containing a dictionary of Note items keyed to the Note.keyword attribute:

from sqlalchemy import Column, Integer, String, ForeignKey
from sqlalchemy.orm import relationship
from sqlalchemy.orm.collections import attribute_mapped_collection
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()

class Item(Base):
    __tablename__ = 'item'
    id = Column(Integer, primary_key=True)
    notes = relationship("Note",
                collection_class=attribute_mapped_collection('keyword'),
                cascade="all, delete-orphan")

class Note(Base):
    __tablename__ = 'note'
    id = Column(Integer, primary_key=True)
    item_id = Column(Integer, ForeignKey('item.id'), nullable=False)
    keyword = Column(String)
    text = Column(String)

    def __init__(self, keyword, text):
        self.keyword = keyword
        self.text = text

Item.notes is then a dictionary:

>>> item = Item()
>>> item.notes['a'] = Note('a', 'atext')
>>> item.notes.items()
{'a': <__main__.Note object at 0x2eaaf0>}
orm.collections.attribute_mapped_collection() will ensure that the .keyword attribute of each Note complies with the key in the dictionary. For example, when assigning to Item.notes, the dictionary key we supply must match that of the actual Note object:

item = Item()
item.notes = {
    'a': Note('a', 'atext'),
    'b': Note('b', 'btext')
}

The attribute which orm.collections.attribute_mapped_collection() uses as a key does not need to be mapped at all! Using a regular Python @property allows virtually any detail or combination of details about the object to be used as the key, as below when we establish it as a tuple of Note.keyword and the first ten letters of the Note.text field:

class Item(Base):
    __tablename__ = 'item'
    id = Column(Integer, primary_key=True)
    notes = relationship("Note",
                collection_class=attribute_mapped_collection('note_key'),
                backref="item",
                cascade="all, delete-orphan")

class Note(Base):
    __tablename__ = 'note'
    id = Column(Integer, primary_key=True)
    item_id = Column(Integer, ForeignKey('item.id'), nullable=False)
    keyword = Column(String)
    text = Column(String)

    @property
    def note_key(self):
        return (self.keyword, self.text[0:10])

    def __init__(self, keyword, text):
        self.keyword = keyword
        self.text = text

Above we added a Note.item backref. Assigning to this reverse relationship, the Note is added to the Item.notes dictionary and the key is generated for us automatically:

>>> item = Item()
>>> n1 = Note("a", "atext")
>>> n1.item = item
>>> item.notes
{('a', 'atext'): <__main__.Note object at 0x2eaaf0>}

Other built-in dictionary types include orm.collections.column_mapped_collection(), which is almost like attribute_mapped_collection except given the Column object directly:

from sqlalchemy.orm.collections import column_mapped_collection

class Item(Base):
    __tablename__ = 'item'
    id = Column(Integer, primary_key=True)
    notes = relationship("Note",
                collection_class=column_mapped_collection(Note.__table__.c.keyword),
cascade="all, delete-orphan") as well as orm.collections.mapped_collection() which is passed any callable function. Note that its usually easier to use orm.collections.attribute_mapped_collection() along with a @property as mentioned earlier: from sqlalchemy.orm.collections import mapped_collection class Item(Base): __tablename__ = item id = Column(Integer, primary_key=True) notes = relationship("Note", collection_class=mapped_collection(lambda note: note.text[0:10]), cascade="all, delete-orphan") Dictionary mappings are often combined with the Association Proxy extension to produce streamlined dictionary views. See Proxying to Dictionary Based Collections and Composite Association Proxies for examples. Custom Collection Implementations You can use your own types for collections as well. In simple cases, simply inherting from list or set, adding custom behavior, is all thats needed. In other cases, special decorators are needed to tell SQLAlchemy more detail about how the collection operates. Do I need a custom collection implementation ? In most cases not at all ! The most common use cases for a custom collection is one that validates or marshals incoming values into a new form, such as a string that becomes a class instance, or one which goes a step beyond and represents the data internally in some fashion, presenting a view of that data on the outside of a different form. For the rst use case, the orm.validates() decorator is by far the simplest way to intercept incoming values in all cases for the purposes of validation and simple marshaling. See Simple Validators for an example of this. For the second use case, the Association Proxy extension is a well-tested, widely used system that provides a read/write view of a collection in terms of some attribute present on the target object. As the target attribute can be a @property that returns virtually anything, a wide array of alternative views of a collection can be constructed with just a few functions. This approach leaves the underlying mapped collection unaffected and avoids the need to carefully tailor collection behavior on a method-by-method basis. Customized collections are useful when the collection needs to have special behaviors upon access or mutation operations that cant otherwise be modeled externally to the collection. They can of course be combined with the above two approaches. Collections in SQLAlchemy are transparently instrumented. Instrumentation means that normal operations on the collection are tracked and result in changes being written to the database at ush time. Additionally, collection operations can re events which indicate some secondary operation must take place. Examples of a secondary operation include saving the child item in the parents Session (i.e. the save-update cascade), as well as synchronizing the state of a bi-directional relationship (i.e. a backref). The collections package understands the basic interface of lists, sets and dicts and will automatically apply instrumentation to those built-in types and their subclasses. Object-derived types that implement a basic collection interface are detected and instrumented via duck-typing: class ListLike(object): def __init__(self): self.data = [] def append(self, item):
        self.data.append(item)
    def remove(self, item):
        self.data.remove(item)
    def extend(self, items):
        self.data.extend(items)
    def __iter__(self):
        return iter(self.data)
    def foo(self):
        return 'foo'

append, remove, and extend are known list-like methods, and will be instrumented automatically. __iter__ is not a mutator method and won't be instrumented, and foo won't be either.

Duck-typing (i.e. guesswork) isn't rock-solid, of course, so you can be explicit about the interface you are implementing by providing an __emulates__ class attribute:

class SetLike(object):
    __emulates__ = set

    def __init__(self):
        self.data = set()
    def append(self, item):
        self.data.add(item)
    def remove(self, item):
        self.data.remove(item)
    def __iter__(self):
        return iter(self.data)

This class looks list-like because of append, but __emulates__ forces it to set-like. remove is known to be part of the set interface and will be instrumented.

But this class won't work quite yet: a little glue is needed to adapt it for use by SQLAlchemy. The ORM needs to know which methods to use to append, remove and iterate over members of the collection. When using a type like list or set, the appropriate methods are well-known and used automatically when present. This set-like class does not provide the expected add method, so we must supply an explicit mapping for the ORM via a decorator.
Marking append as the appender via the collection decorator module supplies that mapping:

from sqlalchemy.orm.collections import collection

class SetLike(object):
    __emulates__ = set

    def __init__(self):
        self.data = set()

    @collection.appender
    def append(self, item):
        self.data.add(item)

    def remove(self, item):
        self.data.remove(item)

    def __iter__(self):
        return iter(self.data)

And that's all that's needed to complete the example. SQLAlchemy will add instances via the append method. remove and __iter__ are the default methods for sets and will be used for removing and iteration. Default methods can be changed as well:

from sqlalchemy.orm.collections import collection

class MyList(list):
    @collection.remover
    def zark(self, item):
        # do something special, then perform the removal
        list.remove(self, item)

    @collection.iterator
    def hey_use_this_instead_for_iteration(self):
        # iterate however you like
        return iter(list(self))

There is no requirement to be list-, or set-like at all. Collection classes can be any shape, so long as they have the append, remove and iterate interface marked for SQLAlchemy's use. Append and remove methods will be called with a mapped entity as the single argument, and iterator methods are called with no arguments and must return an iterator.
class NodeMap(OrderedDict, MappedCollection):
    """Holds 'Node' objects, keyed by the 'name' attribute
    with insert order maintained."""

    def __init__(self, *args, **kw):
        MappedCollection.__init__(self, keyfunc=lambda node: node.name)
        OrderedDict.__init__(self, *args, **kw)

When subclassing MappedCollection, user-defined versions of __setitem__() or __delitem__() should be decorated with collection.internally_instrumented(), if they call down to those same methods on MappedCollection. This is because the methods on MappedCollection are already instrumented - calling them from within an already instrumented call can cause events to be fired off repeatedly, or inappropriately, leading to internal state corruption in rare cases:

from sqlalchemy.orm.collections import MappedCollection,\
                                    collection

class MyMappedCollection(MappedCollection):
    """Use @internally_instrumented when your methods
    call down to already-instrumented methods.
    """

    @collection.internally_instrumented
    def __setitem__(self, key, value, _sa_initiator=None):
        # do something with key, value
        super(MyMappedCollection, self).__setitem__(key, value, _sa_initiator)

    @collection.internally_instrumented
    def __delitem__(self, key, _sa_initiator=None):
        # do something with key
        super(MyMappedCollection, self).__delitem__(key, _sa_initiator)

The ORM understands the dict interface just like lists and sets, and will automatically instrument all dict-like methods if you choose to subclass dict or provide dict-like collection behavior in a duck-typed class. You must decorate appender and remover methods, however; there are no compatible methods in the basic dictionary interface for SQLAlchemy to use by default. Iteration will go through itervalues() unless otherwise decorated.
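A minimal sketch of what that explicit decoration might look like on a dict subclass; the class and method names here are hypothetical, and the pattern mirrors what the built-in MappedCollection provides out of the box:

from sqlalchemy.orm.collections import collection

class KeywordDict(dict):
    """Keys each child object by its .keyword attribute (hypothetical)."""

    @collection.appender
    @collection.internally_instrumented
    def _append(self, value, _sa_initiator=None):
        # delegate to the already-instrumented __setitem__, forwarding
        # the initiator so collection events fire exactly once
        self.__setitem__(value.keyword, value, _sa_initiator)

    @collection.remover
    @collection.internally_instrumented
    def _remove(self, value, _sa_initiator=None):
        self.__delitem__(value.keyword, _sa_initiator)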
The recipe decorators all require parens, even those that take no arguments:

@collection.adds('entity')
def insert(self, position, entity): ...

@collection.removes_return()
def popitem(self): ...

Decorators can be specified in long-hand for Python 2.3, or with the class-level dict attribute __instrumentation__; see the source for details.

static adds(arg)
    Mark the method as adding an entity to the collection.

    Adds "add to collection" handling to the method. The decorator argument indicates which method argument holds the SQLAlchemy-relevant value. Arguments can be specified positionally (i.e. integer) or by name:

    @collection.adds(1)
    def push(self, item): ...

    @collection.adds('entity')
    def do_stuff(self, thing, entity=None): ...

static appender(fn)
    Tag the method as the collection appender.

    The appender method is called with one positional argument: the value to append. The method will be automatically decorated with adds(1) if not already decorated:

    @collection.appender
    def add(self, append): ...

    # or, equivalently
    @collection.appender
    @collection.adds(1)
    def add(self, append): ...

    # for mapping type, an append may kick out a previous value
    # that occupies that slot. consider d['a'] = 'foo'- any previous
    # value in d['a'] is discarded.
    @collection.appender
    @collection.replaces(1)
    def add(self, entity):
        key = some_key_func(entity)
        previous = None
        if key in self:
            previous = self[key]
        self[key] = entity
        return previous

    If the value to append is not allowed in the collection, you may raise an exception. Something to remember is that the appender will be called for each object mapped by a database query. If the database contains rows that violate your collection semantics, you will need to get creative to fix the problem, as access via the collection will not work.
If the appender method is internally instrumented, you must also receive the keyword argument _sa_initiator and ensure its promulgation to collection events.

static converter(fn)
    Tag the method as the collection converter.

    This optional method will be called when a collection is being replaced entirely, as in:

    myobj.acollection = [newvalue1, newvalue2]

    The converter method will receive the object being assigned and should return an iterable of values suitable for use by the appender method. A converter must not assign values or mutate the collection; its sole job is to adapt the value the user provides into an iterable of values for the ORM's use.

    The default converter implementation will use duck-typing to do the conversion. A dict-like collection will be converted into an iterable of dictionary values, and other types will simply be iterated:

    @collection.converter
    def convert(self, other): ...

    If the duck-typing of the object does not match the type of this collection, a TypeError is raised. Supply an implementation of this method if you want to expand the range of possible types that can be assigned in bulk or perform validation on the values about to be assigned.

static internally_instrumented(fn)
    Tag the method as instrumented.

    This tag will prevent any decoration from being applied to the method. Use this if you are orchestrating your own calls to collection_adapter() in one of the basic SQLAlchemy interface methods, or to prevent an automatic ABC method decoration from wrapping your implementation:

    # normally an 'extend' method on a list-like class would be
    # automatically intercepted and re-implemented in terms of
    # SQLAlchemy events and append(). your implementation will
    # never be called, unless:
    @collection.internally_instrumented
    def extend(self, items): ...

static iterator(fn)
    Tag the method as the collection iterator.

    The iterator method is called with no arguments. It is expected to return an iterator over all collection members:

    @collection.iterator
    def __iter__(self): ...

static link(fn)
    Tag the method as a "linked to attribute" event handler.

    This optional event handler will be called when the collection class is linked to or unlinked from the InstrumentedAttribute. It is invoked immediately after the '_sa_adapter' property is set on the instance. A single argument is passed: the collection adapter that has been linked, or None if unlinking.

static remover(fn)
    Tag the method as the collection remover.
The remover method is called with one positional argument: the value to remove. The method will be automatically decorated with removes_return() if not already decorated:

@collection.remover
def zap(self, entity): ...

# or, equivalently
@collection.remover
@collection.removes_return()
def zap(self): ...

If the value to remove is not present in the collection, you may raise an exception or return None to ignore the error.

If the remove method is internally instrumented, you must also receive the keyword argument _sa_initiator and ensure its promulgation to collection events.

static removes(arg)
    Mark the method as removing an entity in the collection.

    Adds "remove from collection" handling to the method. The decorator argument indicates which method argument holds the SQLAlchemy-relevant value to be removed. Arguments can be specified positionally (i.e. integer) or by name:

    @collection.removes(1)
    def zap(self, item): ...

    For methods where the value to remove is not known at call-time, use collection.removes_return.

static removes_return()
    Mark the method as removing an entity in the collection.

    Adds "remove from collection" handling to the method. The return value of the method, if any, is considered the value to remove. The method arguments are not inspected:

    @collection.removes_return()
    def pop(self): ...

    For methods where the value to remove is known at call-time, use collection.removes.

static replaces(arg)
    Mark the method as replacing an entity in the collection.

    Adds "add to collection" and "remove from collection" handling to the method. The decorator argument indicates which method argument holds the SQLAlchemy-relevant value to be added, and the return value, if any, will be considered the value to remove. Arguments can be specified positionally (i.e. integer) or by name:

    @collection.replaces(2)
    def __setitem__(self, index, item): ...

sqlalchemy.orm.collections.collection_adapter(collection)
    Fetch the CollectionAdapter for a collection.

sqlalchemy.orm.collections.column_mapped_collection(mapping_spec)
    A dictionary-based collection type with column-based keying.
Returns a MappedCollection factory with a keying function generated from mapping_spec, which may be a Column or a sequence of Columns.

The key value must be immutable for the lifetime of the object. You can not, for example, map on foreign key values if those key values will change during the session, i.e. from None to a database-assigned integer after a session flush.

sqlalchemy.orm.collections.mapped_collection(keyfunc)
A dictionary-based collection type with arbitrary keying.

Returns a MappedCollection factory with a keying function generated from keyfunc, a callable that takes an entity and returns a key value.

The key value must be immutable for the lifetime of the object. You can not, for example, map on foreign key values if those key values will change during the session, i.e. from None to a database-assigned integer after a session flush.

class sqlalchemy.orm.collections.MappedCollection(keyfunc)
A basic dictionary-based collection class.

Extends dict with the minimal bag semantics that collection classes require. set and remove are implemented in terms of a keying function: any callable that takes an object and returns an object for use as a dictionary key.

__init__(keyfunc)
Create a new collection with keying provided by keyfunc.

keyfunc may be any callable that takes an object and returns an object for use as a dictionary key.

The keyfunc will be called every time the ORM needs to add a member by value-only (such as when loading instances from the database) or remove a member. The usual cautions about dictionary keying apply: keyfunc(object) should return the same output for the life of the collection. Keying based on mutable properties can result in unreachable instances "lost" in the collection.

remove(value, _sa_initiator=None)
Remove an item by value, consulting the keyfunc for the key.

set(value, _sa_initiator=None)
Add an item by value, consulting the keyfunc for the key.
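As an illustration, here is a sketch of mapped_collection keying a hypothetical Note class by its keyword attribute; the Item and Note mappings and the Note constructor are assumed, not shown:

from sqlalchemy.orm.collections import mapped_collection

mapper(Item, items_table, properties={
    'notes': relationship(Note,
        collection_class=mapped_collection(lambda note: note.keyword))
})

# the collection now behaves like a dict keyed on note.keyword
item = Item()
item.notes['tip'] = Note('tip', 'some text')

column_mapped_collection(notes_table.c.keyword) would produce an equivalent collection keyed directly on the column value.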
    def __repr__(self):
        return self.__class__.__name__ + " " + self.name

class Manager(Employee):
    def __init__(self, name, manager_data):
        self.name = name
        self.manager_data = manager_data

    def __repr__(self):
        return (
            self.__class__.__name__ + " " +
            self.name + " " + self.manager_data
        )

class Engineer(Employee):
    def __init__(self, name, engineer_info):
        self.name = name
        self.engineer_info = engineer_info

    def __repr__(self):
        return (
            self.__class__.__name__ + " " +
            self.name + " " + self.engineer_info
        )
)

managers = Table('managers', metadata,
    Column('employee_id', Integer,
           ForeignKey('employees.employee_id'),
           primary_key=True),
    Column('manager_data', String(50)),
)

One natural effect of the joined table inheritance configuration is that the identity of any mapped object can be determined entirely from the base table. This has obvious advantages, so SQLAlchemy always considers the primary key columns of a joined inheritance class to be those of the base table only, unless otherwise manually configured. In other words, the employee_id column of both the engineers and managers table is not used to locate the Engineer or Manager object itself - only the value in employees.employee_id is considered, and the primary key in this case is non-composite. engineers.employee_id and managers.employee_id are still of course critical to the proper operation of the pattern overall as they are used to locate the joined row, once the parent row has been determined, either through a distinct SELECT statement or all at once within a JOIN.

We then configure mappers as usual, except we use some additional arguments to indicate the inheritance relationship, the polymorphic discriminator column, and the polymorphic identity of each class; this is the value that will be stored in the polymorphic discriminator column.

mapper(Employee, employees, polymorphic_on=employees.c.type,
       polymorphic_identity='employee')
mapper(Engineer, engineers, inherits=Employee,
       polymorphic_identity='engineer')
mapper(Manager, managers, inherits=Employee,
       polymorphic_identity='manager')

And that's it. Querying against Employee will return a combination of Employee, Engineer and Manager objects. Newly saved Engineer, Manager, and Employee objects will automatically populate the employees.type column with 'engineer', 'manager', or 'employee', as appropriate.

Basic Control of Which Tables are Queried

The with_polymorphic() method of Query affects the specific subclass tables which the Query selects from. Normally, a query such as this:

session.query(Employee).all()

...selects only from the employees table. When loading fresh from the database, our joined-table setup will query from the parent table only, using SQL such as this:

SELECT employees.employee_id AS employees_employee_id,
    employees.name AS employees_name,
    employees.type AS employees_type
FROM employees
[]

As attributes are requested from those Employee objects which are represented in either the engineers or managers child tables, a second load is issued for the columns in that related row, if the data was not already loaded. So above, after accessing the objects you'd see further SQL issued along the lines of:

SELECT managers.employee_id AS managers_employee_id,
    managers.manager_data AS managers_manager_data
FROM managers
WHERE ? = managers.employee_id
[5]
SELECT engineers.employee_id AS engineers_employee_id,
    engineers.engineer_info AS engineers_engineer_info
FROM engineers
WHERE ? = engineers.employee_id
[2]

This behavior works well when issuing searches for small numbers of items, such as when using Query.get(), since the full range of joined tables are not pulled in to the SQL statement unnecessarily. But when querying a larger span of rows which are known to be of many types, you may want to actively join to some or all of the joined tables. The with_polymorphic feature of Query and mapper provides this.

Telling our query to polymorphically load Engineer and Manager objects:

query = session.query(Employee).with_polymorphic([Engineer, Manager])

produces a query which joins the employees table to both the engineers and managers tables like the following:

query.all()
SELECT employees.employee_id AS employees_employee_id,
    engineers.employee_id AS engineers_employee_id,
    managers.employee_id AS managers_employee_id,
    employees.name AS employees_name,
    employees.type AS employees_type,
    engineers.engineer_info AS engineers_engineer_info,
    managers.manager_data AS managers_manager_data
FROM employees
    LEFT OUTER JOIN engineers
    ON employees.employee_id = engineers.employee_id
    LEFT OUTER JOIN managers
    ON employees.employee_id = managers.employee_id
[]

with_polymorphic() accepts a single class or mapper, a list of classes/mappers, or the string '*' to indicate all subclasses:

# join to the engineers table
query.with_polymorphic(Engineer)

# join to the engineers and managers tables
query.with_polymorphic([Engineer, Manager])

# join to all subclass tables
query.with_polymorphic('*')

It also accepts a second argument selectable which replaces the automatic join creation and instead selects directly from the selectable given. This feature is normally used with "concrete" inheritance, described later, but can be used with any kind of inheritance setup in the case that specialized SQL should be used to load polymorphically:

# custom selectable
query.with_polymorphic(
    [Engineer, Manager],
    employees.outerjoin(managers).outerjoin(engineers)
)

with_polymorphic() is also needed when you wish to add filter criteria that are specific to one or more subclasses; it makes the subclasses' columns available to the WHERE clause:
session.query(Employee).with_polymorphic([Engineer, Manager]).\
    filter(or_(Engineer.engineer_info=='w', Manager.manager_data=='q'))

Note that if you only need to load a single subtype, such as just the Engineer objects, with_polymorphic() is not needed since you would query against the Engineer class directly.

The mapper also accepts with_polymorphic as a configurational argument so that the joined-style load will be issued automatically. This argument may be the string '*', a list of classes, or a tuple consisting of either, followed by a selectable.

mapper(Employee, employees, polymorphic_on=employees.c.type,
       polymorphic_identity='employee',
       with_polymorphic='*')
mapper(Engineer, engineers, inherits=Employee,
       polymorphic_identity='engineer')
mapper(Manager, managers, inherits=Employee,
       polymorphic_identity='manager')

The above mapping will produce a query similar to that of with_polymorphic('*') for every query of Employee objects. Using with_polymorphic() with Query will override the mapper-level with_polymorphic setting.

Advanced Control of Which Tables are Queried

The Query.with_polymorphic() method and configuration works fine for simplistic scenarios. However, it currently does not work with any Query that selects against individual columns or against multiple classes - it also has to be called at the outset of a query.

For total control of how Query joins along inheritance relationships, use the Table objects directly and construct joins manually. For example, to query the name of employees with particular criterion:

session.query(Employee.name).\
    outerjoin((engineer, engineer.c.employee_id==Employee.employee_id)).\
    outerjoin((manager, manager.c.employee_id==Employee.employee_id)).\
    filter(or_(Engineer.engineer_info=='w', Manager.manager_data=='q'))

The base table, in this case the employees table, isn't always necessary. A SQL query is always more efficient with fewer joins. Here, if we wanted to just load information specific to managers or engineers, we can instruct Query to use only those tables. The FROM clause is determined by what's specified in the Session.query(), Query.filter(), or Query.select_from() methods:

session.query(Manager.manager_data).select_from(manager)

session.query(engineer.c.id).\
    filter(engineer.c.engineer_info==manager.c.manager_data)

Creating Joins to Specific Subtypes

The of_type() method is a helper which allows the construction of joins along relationship() paths while narrowing the criterion to specific subclasses. Suppose the employees table represents a collection of employees which are associated with a Company object. We'll add a company_id column to the employees table and a new table companies:

companies = Table('companies', metadata,
    Column('company_id', Integer, primary_key=True),
    Column('name', String(50))
)
employees = Table('employees', metadata,
    Column('employee_id', Integer, primary_key=True),
    Column('name', String(50)),
    Column('type', String(30), nullable=False),
    Column('company_id', Integer, ForeignKey('companies.company_id'))
)

class Company(object):
    pass

mapper(Company, companies, properties={
    'employees': relationship(Employee)
})

When querying from Company onto the Employee relationship, the join() method as well as the any() and has() operators will create a join from companies to employees, without including engineers or managers in the mix. If we wish to have criterion which is specifically against the Engineer class, we can tell those methods to join or subquery against the joined table representing the subclass using the of_type() operator:

session.query(Company).\
    join(Company.employees.of_type(Engineer)).\
    filter(Engineer.engineer_info=='someinfo')

A longhand version of this would involve spelling out the full target selectable within a 2-tuple:

session.query(Company).\
    join((employees.join(engineers), Company.employees)).\
    filter(Engineer.engineer_info=='someinfo')

Currently, of_type() accepts a single class argument. It may be expanded later on to accept multiple classes. For now, to join to any group of subclasses, the longhand notation allows this flexibility:

session.query(Company).\
    join(
        (employees.outerjoin(engineers).outerjoin(managers),
         Company.employees)
    ).\
    filter(
        or_(Engineer.engineer_info=='someinfo',
            Manager.manager_data=='somedata')
    )

The any() and has() operators also can be used with of_type() when the embedded criterion is in terms of a subclass:

session.query(Company).\
    filter(
        Company.employees.of_type(Engineer).
            any(Engineer.engineer_info=='someinfo')
    ).all()

Note that the any() and has() are both shorthand for a correlated EXISTS query. To build one by hand looks like:

session.query(Company).filter(
    exists([1],
        and_(Engineer.engineer_info=='someinfo',
             employees.c.company_id==companies.c.company_id),
        from_obj=employees.join(engineers)
    )
).all()

The EXISTS subquery above selects from the join of employees to engineers, and also specifies criterion which correlates the EXISTS subselect back to the parent companies table.
Notice in this case there is no type column. If polymorphic loading is not required, there's no advantage to using inherits here; you just define a separate mapper for each class.

mapper(Employee, employees_table)
mapper(Manager, managers_table)
mapper(Engineer, engineers_table)

To load polymorphically, the with_polymorphic argument is required, along with a selectable indicating how rows should be loaded. In this case we must construct a UNION of all three tables. SQLAlchemy includes a helper function to create these called polymorphic_union(), which will map all the different columns into a structure of selects with the same numbers and names of columns, and also generate a virtual type column for each subselect:

pjoin = polymorphic_union({
    'employee': employees_table,
    'manager': managers_table,
    'engineer': engineers_table
}, 'type', 'pjoin')

employee_mapper = mapper(Employee, employees_table,
                         with_polymorphic=('*', pjoin),
                         polymorphic_on=pjoin.c.type,
                         polymorphic_identity='employee')
manager_mapper = mapper(Manager, managers_table,
                        inherits=employee_mapper,
                        concrete=True,
                        polymorphic_identity='manager')
engineer_mapper = mapper(Engineer, engineers_table,
                         inherits=employee_mapper,
                         concrete=True,
                         polymorphic_identity='engineer')

Upon select, the polymorphic union produces a query like this:

session.query(Employee).all()
SELECT pjoin.type AS pjoin_type,
    pjoin.manager_data AS pjoin_manager_data,
    pjoin.employee_id AS pjoin_employee_id,
    pjoin.name AS pjoin_name,
    pjoin.engineer_info AS pjoin_engineer_info
FROM (
    SELECT employees.employee_id AS employee_id,
        CAST(NULL AS VARCHAR(50)) AS manager_data,
        employees.name AS name,
        CAST(NULL AS VARCHAR(50)) AS engineer_info,
        'employee' AS type
    FROM employees
    UNION ALL
    SELECT managers.employee_id AS employee_id,
        managers.manager_data AS manager_data,
        managers.name AS name,
        CAST(NULL AS VARCHAR(50)) AS engineer_info,
        'manager' AS type
    FROM managers
    UNION ALL
    SELECT engineers.employee_id AS employee_id,
        CAST(NULL AS VARCHAR(50)) AS manager_data,
        engineers.name AS name,
        engineers.engineer_info AS engineer_info,
        'engineer' AS type
    FROM engineers
) AS pjoin
[]
Concrete Inheritance with Declarative

As of 0.7.3, the Declarative module includes helpers for concrete inheritance. See Using the Concrete Helpers for more information.
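As a brief illustration, a sketch using the ConcreteBase helper from sqlalchemy.ext.declarative; the columns shown are illustrative, and the full options are covered in the section referenced above:

from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base, ConcreteBase

Base = declarative_base()

class Employee(ConcreteBase, Base):
    __tablename__ = 'employees'
    employee_id = Column(Integer, primary_key=True)
    name = Column(String(50))
    __mapper_args__ = {'polymorphic_identity': 'employee',
                       'concrete': True}

class Manager(Employee):
    __tablename__ = 'managers'
    employee_id = Column(Integer, primary_key=True)
    name = Column(String(50))
    manager_data = Column(String(50))
    __mapper_args__ = {'polymorphic_identity': 'manager',
                       'concrete': True}

ConcreteBase generates the polymorphic_union() selectable automatically, in place of the manual pjoin construction shown earlier.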
engineers_table = Table('engineers', metadata,
    Column('employee_id', Integer, primary_key=True),
    Column('name', String(50)),
    Column('engineer_info', String(50)),
    Column('company_id', Integer, ForeignKey('companies.id'))
)

mapper(Employee, employees_table,
       with_polymorphic=('*', pjoin),
       polymorphic_on=pjoin.c.type,
       polymorphic_identity='employee')
mapper(Manager, managers_table,
       inherits=employee_mapper,
       concrete=True,
       polymorphic_identity='manager')
mapper(Engineer, engineers_table,
       inherits=employee_mapper,
       concrete=True,
       polymorphic_identity='engineer')
mapper(Company, companies, properties={
    'employees': relationship(Employee)
})

The big limitation with concrete table inheritance is that relationship() objects placed on each concrete mapper do not propagate to child mappers. If you want to have the same relationship() objects set up on all concrete mappers, they must be configured manually on each. To configure back references in such a configuration the back_populates keyword may be used instead of backref, such as below where both A(object) and B(A) bidirectionally reference C:

ajoin = polymorphic_union({
    'a': a_table,
    'b': b_table
}, 'type', 'ajoin')

mapper(A, a_table,
       with_polymorphic=('*', ajoin),
       polymorphic_on=ajoin.c.type,
       polymorphic_identity='a',
       properties={
           'some_c': relationship(C, back_populates='many_a')
       })
mapper(B, b_table, inherits=A,
       concrete=True,
       polymorphic_identity='b',
       properties={
           'some_c': relationship(C, back_populates='many_a')
       })
mapper(C, c_table, properties={
    'many_a': relationship(A, collection_class=set,
                           back_populates='some_c'),
})
# work with sess
myobject = MyObject('foo', 'bar')
session.add(myobject)
session.commit()

Above, the sessionmaker() call creates a class for us, which we assign to the name Session. This class is a subclass of the actual Session class, which when instantiated, will use the arguments we've given the function, in this case to use a particular Engine for connection resources.

A typical setup will associate the sessionmaker() with an Engine, so that each Session generated will use this Engine to acquire connection resources. This association can be set up as in the example above, using the bind argument.

When you write your application, place the result of the sessionmaker() call at the global level. The resulting Session class, configured for your application, should then be used by the rest of the application as the source of new Session instances.

An extremely common step taken by applications, including virtually all web applications, is to further wrap the sessionmaker() construct in a so-called "contextual" session, provided by the scoped_session() construct. This construct places the sessionmaker() into a registry that maintains a single Session per application thread. Information on using contextual sessions is at Contextual/Thread-local Sessions.

Adding Additional Configuration to an Existing sessionmaker()

A common scenario is where the sessionmaker() is invoked at module import time, however the generation of one or more Engine instances to be associated with the sessionmaker() has not yet proceeded. For this use case, the sessionmaker() construct offers the sessionmaker.configure() method, which will place additional configuration directives into an existing sessionmaker() that will take place when the construct is invoked:

from sqlalchemy.orm import sessionmaker
from sqlalchemy import create_engine

# configure Session class with desired options
Session = sessionmaker()

# later, we create the engine
engine = create_engine('postgresql://...')

# associate it with our custom Session class
Session.configure(bind=engine)

# work with the session
session = Session()

Creating Ad-Hoc Session Objects with Alternate Arguments

For the use case where an application needs to create a new Session with special arguments that deviate from what is normally used throughout the application, such as a Session that binds to an alternate source of connectivity, or a Session that should have other arguments such as expire_on_commit established differently from what most of the application wants, specific arguments can be passed to the sessionmaker() construct's class itself. These arguments will override whatever configurations have already been placed, such as below, where a new Session is constructed against a specific Connection:

# at the module level, the global sessionmaker,
# bound to a specific Engine
Session = sessionmaker(bind=engine)

# later, some unit of code wants to create a
# Session that is bound to a specific Connection
conn = engine.connect()
session = Session(bind=conn)

The typical rationale for the association of a Session with a specific Connection is that of a test fixture that maintains an external transaction - see Joining a Session into an External Transaction for an example of this.
Objects become detached if their owning session is discarded. They are still functional in the detached state if the user has ensured that their state has not been expired before detachment, but they will not be able to represent the current state of database data. Because of this, it's best to consider persisted objects as an extension of the state of a particular Session, and to keep that session around until all referenced objects have been discarded.

An exception to this is when objects are placed in caches or otherwise shared among threads or processes, in which case their detached state can be stored, transmitted, or shared. However, the state of detached objects should still be transferred back into a new Session using Session.add() or Session.merge() before working with the object (or in the case of merge, its state) again.

It is also very common that a Session as well as its associated objects are only referenced by a single thread. Sharing objects between threads is most safely accomplished by sharing their state among multiple instances of those objects, each associated with a distinct Session per thread, using Session.merge() to transfer state between threads. This pattern is not a strict requirement by any means, but it has the least chance of introducing concurrency issues.

To help with the recommended Session-per-thread, Session-per-set-of-objects patterns, the scoped_session() function is provided which produces a thread-managed registry of Session objects. It is commonly used in web applications so that a single global variable can be used to safely represent transactional sessions with sets of objects, localized to a single thread. More on this object is in Contextual/Thread-local Sessions.

Is the Session a cache?

Yeee...no. It's somewhat used as a cache, in that it implements the identity map pattern, and stores objects keyed to their primary key. However, it doesn't do any kind of query caching. This means, if you say session.query(Foo).filter_by(name='bar'), even if Foo(name='bar') is right there, in the identity map, the session has no idea about that. It has to issue SQL to the database, get the rows back, and then when it sees the primary key in the row, then it can look in the local identity map and see that the object is already there. It's only when you say query.get({some primary key}) that the Session doesn't have to issue a query.

Additionally, the Session stores object instances using a weak reference by default. This also defeats the purpose of using the Session as a cache.

The Session is not designed to be a global object from which everyone consults as a "registry" of objects. That's more the job of a second level cache. SQLAlchemy provides a pattern for implementing second level caching using Beaker, via the Beaker Caching example.

How can I get the Session for a certain object?

Use the object_session() classmethod available on Session:

session = Session.object_session(someobject)

Is the session thread-safe?

Nope. It has no thread synchronization of any kind built in, and particularly when you do a flush operation, it definitely is not open to concurrent threads accessing it, because it holds onto a single database connection at that point.
If you use a session which is non-transactional (meaning, autocommit is set to True, not the default setting) for read operations only, it's still not thread-safe, but you also won't get any catastrophic failures either, since it checks out and returns connections to the connection pool on an as-needed basis; it's just that different threads might load the same objects independently of each other, but only one will wind up in the identity map (however, the other one might still live in a collection somewhere).

But the bigger point here is, you should not want to use the session with multiple concurrent threads. That would be like having everyone at a restaurant all eat from the same plate. The session is a local "workspace" that you use for a specific set of tasks; you don't want to, or need to, share that session
with other threads who are doing some other task. If, on the other hand, there are other threads participating in the same task you are, such as in a desktop graphical application, then you would be sharing the session with those threads, but you also will have implemented a proper locking scheme (or your graphical framework does) so that those threads do not collide.

A multithreaded application is usually going to want to make usage of scoped_session() to transparently manage sessions per thread. More on this at Contextual/Thread-local Sessions.

Querying

The query() function takes one or more entities and returns a new Query object which will issue mapper queries within the context of this Session. An entity is defined as a mapped class, a Mapper object, an orm-enabled descriptor, or an AliasedClass object:

# query from a class
session.query(User).filter_by(name='ed').all()

# query with multiple classes, returns tuples
session.query(User, Address).join('addresses').filter_by(name='ed').all()

# query using orm-enabled descriptors
session.query(User.name, User.fullname).all()

# query from a mapper
user_mapper = class_mapper(User)
session.query(user_mapper)

When Query returns results, each object instantiated is stored within the identity map. When a row matches an object which is already present, the same object is returned. In the latter case, whether or not the row is populated onto an existing object depends upon whether the attributes of the instance have been expired or not. A default-configured Session automatically expires all instances along transaction boundaries, so that with a normally isolated transaction, there shouldn't be any issue of instances representing data which is stale with regards to the current transaction.

The Query object is introduced in great detail in Object Relational Tutorial, and further documented in Querying.

Adding New or Existing Items

add() is used to place instances in the session. For transient (i.e. brand new) instances, this will have the effect of an INSERT taking place for those instances upon the next flush. For instances which are persistent (i.e. were loaded by this session), they are already present and do not need to be added. Instances which are detached (i.e. have been removed from a session) may be re-associated with a session using this method:

user1 = User(name='user1')
user2 = User(name='user2')
session.add(user1)
session.add(user2)

session.commit()     # write changes to the database
To add a list of items to the session at once, use add_all():

session.add_all([item1, item2, item3])

The add() operation cascades along the save-update cascade. For more details see the section Cascades.
Merging

merge() reconciles the current state of an instance and its associated children with existing data in the database, and returns a copy of the instance associated with the session. Usage is as follows:

merged_object = session.merge(existing_object)

When given an instance, it follows these steps:

It examines the primary key of the instance. If it's present, it attempts to load an instance with that primary key (or pulls from the local identity map).

If there's no primary key on the given instance, or the given primary key does not exist in the database, a new instance is created.

The state of the given instance is then copied onto the located/newly created instance.

The operation is cascaded to associated child items along the merge cascade. Note that all changes present on the given instance, including changes to collections, are merged.

The new instance is returned.

With merge(), the given instance is not placed within the session, and can be associated with a different session or detached. merge() is very useful for taking the state of any kind of object structure without regard for its origins or current session associations and placing that state within a session. Here are two examples:

An application which reads an object structure from a file and wishes to save it to the database might parse the file, build up the structure, and then use merge() to save it to the database, ensuring that the data within the file is used to formulate the primary key of each element of the structure. Later, when the file has changed, the same process can be re-run, producing a slightly different object structure, which can then be merged in again, and the Session will automatically update the database to reflect those changes.

A web application stores mapped entities within an HTTP session object. When each request starts up, the serialized data can be merged into the session, so that the original entity may be safely shared among requests and threads.

merge() is frequently used by applications which implement their own second level caches. This refers to an application which uses an in memory dictionary, or a tool like Memcached to store objects over long running spans of time. When such an object needs to exist within a Session, merge() is a good choice since it leaves the original cached object untouched. For this use case, merge provides a keyword option called load=False. When this boolean flag is set to False, merge() will not issue any SQL to reconcile the given object against the current state of the database, thereby reducing query overhead. The limitation is that the given object and all of its children may not contain any pending changes, and it's also of course possible that newer information in the database will not be present on the merged object, since no load is issued.
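A minimal sketch of the load=False cache pattern; some_cache here stands in for an application-defined dictionary-like cache and is not part of SQLAlchemy:

# a detached, long-lived object previously placed in the cache
cached_obj = some_cache['user:5']

# copy its state into the current session without emitting a
# SELECT; the cached object itself remains untouched
local_obj = session.merge(cached_obj, load=False)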
Merge Tips
merge() is an extremely useful method for many purposes. However, it deals with the intricate border between objects that are transient/detached and those that are persistent, as well as the automated transference of state. The wide variety of scenarios that can present themselves here often require a more careful approach to the state of objects. Common problems with merge usually involve some unexpected state regarding the object being passed to merge().

Let's use the canonical example of the User and Address objects:

class User(Base):
    __tablename__ = 'user'

    id = Column(Integer, primary_key=True)
    name = Column(String(50), nullable=False)
    addresses = relationship("Address", backref="user")

class Address(Base):
    __tablename__ = 'address'

    id = Column(Integer, primary_key=True)
    email_address = Column(String(50), nullable=False)
    user_id = Column(Integer, ForeignKey('user.id'), nullable=False)

Assume a User object with one Address, already persistent:

>>> u1 = User(name='ed', addresses=[Address(email_address='[email protected]')])
>>> session.add(u1)
>>> session.commit()

We now create a1, an object outside the session, which we'd like to merge on top of the existing Address:

>>> existing_a1 = u1.addresses[0]
>>> a1 = Address(id=existing_a1.id)

A surprise would occur if we said this:

>>> a1.user = u1
>>> a1 = session.merge(a1)
>>> session.commit()
sqlalchemy.orm.exc.FlushError: New instance <Address at 0x1298f50>
with identity key (<class '__main__.Address'>, (1,)) conflicts with
persistent instance <Address at 0x12a25d0>

Why is that? We weren't careful with our cascades. The assignment of a1.user to a persistent object cascaded to the backref of User.addresses and made our a1 object pending, as though we had added it. Now we have two Address objects in the session:

>>> a1 = Address()
>>> a1.user = u1
>>> a1 in session
True
>>> existing_a1 in session
True
>>> a1 is existing_a1
False

Above, our a1 is already pending in the session. The subsequent merge() operation essentially does nothing. Cascade can be configured via the cascade option on relationship(), although in this case it would mean removing the save-update cascade from the User.addresses relationship - and usually, that behavior is extremely convenient. The solution here would usually be to not assign a1.user to an object already persistent in the target session. Note that a new relationship() option introduced in 0.6.5, cascade_backrefs=False, will also prevent the Address from being added to the session via the a1.user = u1 assignment.

Further detail on cascade operation is at Cascades.

Another example of unexpected state:

>>> a1 = Address(id=existing_a1.id, user_id=u1.id)
>>> assert a1.user is None
>>> a1 = session.merge(a1)
>>> session.commit()
sqlalchemy.exc.IntegrityError: (IntegrityError) address.user_id
may not be NULL

Here, we accessed a1.user, which returned its default value of None, which as a result of this access, has been placed in the __dict__ of our object a1. Normally, this operation creates no change event, so the user_id attribute takes precedence during a flush. But when we merge the Address object into the session, the operation is equivalent to:

>>> existing_a1.id = existing_a1.id
>>> existing_a1.user_id = u1.id
>>> existing_a1.user = None

Where above, both user_id and user are assigned to, and change events are emitted for both. The user association takes precedence, and None is applied to user_id, causing a failure.

Most merge() issues can be examined by first checking - is the object prematurely in the session?

>>> a1 = Address(id=existing_a1.id, user_id=u1.id)
>>> assert a1 not in session
>>> a1 = session.merge(a1)

Or is there state on the object that we don't want? Examining __dict__ is a quick way to check:

>>> a1 = Address(id=existing_a1.id, user_id=u1.id)
>>> a1.user
>>> a1.__dict__
{'_sa_instance_state': <sqlalchemy.orm.state.InstanceState object at 0x1298d10>,
 'user_id': 1,
 'id': 1,
 'user': None}
>>> # we don't want user=None merged, remove it
>>> del a1.user
>>> a1 = session.merge(a1)
>>> # success
>>> session.commit()

Deleting

The delete() method places an instance into the Session's list of objects to be marked as deleted:

# mark two objects to be deleted
session.delete(obj1)
session.delete(obj2)

# commit (or flush)
session.commit()
>>> address in user.addresses
True

When the above session is committed, all attributes are expired. The next access of user.addresses will re-load the collection, revealing the desired state:

>>> session.commit()
>>> address in user.addresses
False

The usual practice of deleting items within collections is to forego the usage of delete() directly, and instead use cascade behavior to automatically invoke the deletion as a result of removing the object from the parent collection. The delete-orphan cascade accomplishes this, as illustrated in the example below:

mapper(User, users_table, properties={
    'addresses': relationship(Address, cascade="all, delete, delete-orphan")
})
del user.addresses[1]
session.flush()

Where above, upon removing the Address object from the User.addresses collection, the delete-orphan cascade has the effect of marking the Address object for deletion in the same way as passing it to delete().

See also Cascades for detail on cascades.
flush() creates its own transaction and commits it. Any failures during flush will always result in a rollback of whatever transaction is present. If the Session is not in autocommit=True mode, an explicit call to rollback() is required after a flush fails, even though the underlying transaction will have been rolled back already - this is so that the overall nesting pattern of so-called "subtransactions" is consistently maintained.

Committing

commit() is used to commit the current transaction. It always issues flush() beforehand to flush any remaining state to the database; this is independent of the "autoflush" setting. If no transaction is present, it raises an error. Note that the default behavior of the Session is that a transaction is always present; this behavior can be disabled by setting autocommit=True. In autocommit mode, a transaction can be initiated by calling the begin() method.

Note: The term "transaction" here refers to a transactional construct within the Session itself which may be maintaining zero or more actual database (DBAPI) transactions. An individual DBAPI connection begins participation in the "transaction" as it is first used to execute a SQL statement, then remains present until the session-level "transaction" is completed. See Managing Transactions for further detail.

Another behavior of commit() is that by default it expires the state of all instances present after the commit is complete. This is so that when the instances are next accessed, either through attribute access or by them being present in a Query result set, they receive the most recent state. To disable this behavior, configure sessionmaker() with expire_on_commit=False.

Normally, instances loaded into the Session are never changed by subsequent queries; the assumption is that the current transaction is isolated so the state most recently loaded is correct as long as the transaction continues. Setting autocommit=True works against this model to some degree since the Session behaves in exactly the same way with regard to attribute state, except no transaction is present.

Rolling Back

rollback() rolls back the current transaction. With a default configured session, the post-rollback state of the session is as follows:

All transactions are rolled back and all connections returned to the connection pool, unless the Session was bound directly to a Connection, in which case the connection is still maintained (but still rolled back).

Objects which were initially in the pending state when they were added to the Session within the lifespan of the transaction are expunged, corresponding to their INSERT statement being rolled back. The state of their attributes remains unchanged.

Objects which were marked as deleted within the lifespan of the transaction are promoted back to the persistent state, corresponding to their DELETE statement being rolled back. Note that if those objects were first pending within the transaction, that operation takes precedence instead.

All objects not expunged are fully expired.

With that state understood, the Session may safely continue usage after a rollback occurs.

When a flush() fails, typically for reasons like primary key, foreign key, or "not nullable" constraint violations, a rollback() is issued automatically (it's currently not possible for a flush to continue after a partial failure). However, the flush process always uses its own transactional demarcator called a subtransaction, which is described more fully in the docstrings for Session.
What it means here is that even though the database transaction has been rolled back, the end user must still issue rollback() to fully reset the state of the Session.
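A minimal sketch of the error-handling pattern this implies, for a default (autocommit=False) Session:

try:
    session.commit()       # an underlying flush() may fail here
except Exception:
    session.rollback()     # required to restore the Session to a usable state
    raise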
Expunging

Expunge removes an object from the Session, sending persistent instances to the detached state, and pending instances to the transient state:

session.expunge(obj1)

To remove all items, call expunge_all() (this method was formerly known as clear()).

Closing

The close() method issues an expunge_all(), and releases any transactional/connection resources. When connections are returned to the connection pool, transactional state is rolled back as well.

Refreshing / Expiring

The Session normally works in the context of an ongoing transaction (with the default setting of autocommit=False). Most databases offer "isolated" transactions - this refers to a series of behaviors that allow the work within a transaction to remain consistent as time passes, regardless of the activities outside of that transaction. A key feature of a high degree of transaction isolation is that emitting the same SELECT statement twice will return the same results as when it was called the first time, even if the data has been modified in another transaction.

For this reason, the Session gains very efficient behavior by loading the attributes of each instance only once. Subsequent reads of the same row in the same transaction are assumed to have the same value. The user application also gains directly from this assumption, that the transaction is regarded as a temporary shield against concurrent changes - a good application will ensure that isolation levels are set appropriately such that this assumption can be made, given the kind of data being worked with.

To clear out the currently loaded state on an instance, the instance or its individual attributes can be marked as expired, which results in a reload occurring upon next access of any of the instance's attributes. The instance can also be immediately reloaded from the database. The expire() and refresh() methods achieve this:

# immediately re-load attributes on obj1, obj2
session.refresh(obj1)
session.refresh(obj2)

# expire objects obj1, obj2, attributes will be reloaded
# on the next access:
session.expire(obj1)
session.expire(obj2)

When an expired object reloads, all non-deferred column-based attributes are loaded in one query. Current behavior for expired relationship-based attributes is that they load individually upon access - this behavior may be enhanced in a future release. When a refresh is invoked on an object, the ultimate operation is equivalent to a Query.get(), so any relationships configured with eager loading should also load within the scope of the refresh operation.

refresh() and expire() also support being passed a list of individual attribute names in which to be refreshed. These names can refer to any attribute, column-based or relationship based:

# immediately re-load the attributes 'hello', 'world' on obj1, obj2
session.refresh(obj1, ['hello', 'world'])
session.refresh(obj2, ['hello', 'world'])

# expire the attributes 'hello', 'world' on obj1, obj2; attributes
# will be reloaded on the next access:
session.expire(obj1, ['hello', 'world'])
session.expire(obj2, ['hello', 'world'])
The full contents of the session may be expired at once using expire_all():

session.expire_all()

Note that expire_all() is called automatically whenever commit() or rollback() are called. If using the session in its default mode of autocommit=False and with a well-isolated transactional environment (which is provided by most backends with the notable exception of MySQL MyISAM), there is virtually no reason to ever call expire_all() directly - plenty of state will remain on the current transaction until it is rolled back or committed or otherwise removed. refresh() and expire() similarly are usually only necessary when an UPDATE or DELETE has been issued manually within the transaction using Session.execute().

Session Attributes

The Session itself acts somewhat like a set-like collection. All items present may be accessed using the iterator interface:

for obj in session:
    print obj

And presence may be tested for using regular "contains" semantics:

if obj in session:
    print "Object is present"

The session is also keeping track of all newly created (i.e. pending) objects, all objects which have had changes since they were last loaded or saved (i.e. "dirty"), and everything that's been marked as deleted:

# pending objects recently added to the Session
session.new

# persistent objects which currently have changes detected
# (this collection is now created on the fly each time the property is called)
session.dirty

# persistent objects that have been marked as deleted via session.delete(obj)
session.deleted

Note that objects within the session are by default weakly referenced. This means that when they are dereferenced in the outside application, they fall out of scope from within the Session as well and are subject to garbage collection by the Python interpreter. The exceptions to this include objects which are pending, objects which are marked as deleted, or persistent objects which have pending changes on them. After a full flush, these collections are all empty, and all objects are again weakly referenced. To disable the weak referencing behavior and force all objects within the session to remain until explicitly expunged, configure sessionmaker() with the weak_identity_map=False setting.
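For example, a one-line sketch of that configuration:

# all loaded objects stay in the identity map until expunged explicitly
Session = sessionmaker(bind=engine, weak_identity_map=False)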
2.6.4 Cascades
Mappers support the concept of configurable cascade behavior on relationship() constructs. This behavior controls how the Session should treat the instances that have a parent-child relationship with another instance that is operated upon by the Session. Cascade is indicated as a comma-separated list of string keywords, with the possible values all, delete, save-update, refresh-expire, merge, expunge, and delete-orphan.

Cascading is configured by setting the cascade keyword argument on a relationship():

mapper(Order, order_table, properties={
    'items': relationship(Item, cascade="all, delete-orphan"),
    'customer': relationship(User, secondary=user_orders_table,
                             cascade="save-update"),
})

The above mapper specifies two relationships, items and customer. The items relationship specifies "all, delete-orphan" as its cascade value, indicating that all add, merge, expunge, refresh, delete and expire operations performed on a parent Order instance should also be performed on the child Item instances attached to it. The delete-orphan cascade value additionally indicates that if an Item instance is no longer associated with an Order, it should also be deleted. The "all, delete-orphan" cascade argument allows a so-called lifecycle relationship between an Order and an Item object.

The customer relationship specifies only the save-update cascade value, indicating most operations will not be cascaded from a parent Order instance to a child User instance except for the add() operation. save-update cascade indicates that an add() on the parent will cascade to all child items, and also that items added to a parent which is already present in a session will also be added to that same session. save-update cascade also cascades the pending history of a relationship()-based attribute, meaning that objects which were removed from a scalar or collection attribute whose changes have not yet been flushed are also placed into the new session - this so that foreign key clear operations and deletions will take place (new in 0.6).

Note that the delete-orphan cascade only functions for relationships where the target object can have a single parent at a time, meaning it is only appropriate for one-to-one or one-to-many relationships. For a relationship() which establishes one-to-one via a local foreign key, i.e. a many-to-one that stores only a single parent, or one-to-one/one-to-many via a "secondary" (association) table, a warning will be issued if delete-orphan is configured. To disable this warning, specify the single_parent=True flag on the relationship, which constrains objects to allow attachment to only one parent at a time.

The default value for cascade on relationship() is save-update, merge.

save-update cascade also takes place on backrefs by default. This means that, given a mapping such as this:

mapper(Order, order_table, properties={
    'items': relationship(Item, backref='order')
})

If an Order is already in the session, and is assigned to the order attribute of an Item, the backref appends the Item to the items collection of that Order, resulting in the save-update cascade taking place:

>>> o1 = Order()
>>> session.add(o1)
>>> o1 in session
True

>>> i1 = Item()
>>> i1.order = o1
>>> i1 in o1.items
True
>>> i1 in session
True

This behavior can be disabled as of 0.6.5 using the cascade_backrefs flag:

mapper(Order, order_table, properties={
    'items': relationship(Item, backref='order', cascade_backrefs=False)
})

So above, the assignment of i1.order = o1 will append i1 to the items collection of o1, but will not add i1 to the session. You can, of course, add() i1 to the session at a later point. This option may be helpful for situations where an object needs to be kept out of a session until its construction is completed, but still needs to be given associations to objects which are already persistent in the target session.
# on rollback, the same closure of state
# as that of commit proceeds.
session.rollback()
raise

Using SAVEPOINT

SAVEPOINT transactions, if supported by the underlying engine, may be delineated using the begin_nested() method:

Session = sessionmaker()
session = Session()
session.add(u1)
session.add(u2)

session.begin_nested()      # establish a savepoint
session.add(u3)
session.rollback()          # rolls back u3, keeps u1 and u2

session.commit()            # commits u1 and u2

begin_nested() may be called any number of times, which will issue a new SAVEPOINT with a unique identifier for each call. For each begin_nested() call, a corresponding rollback() or commit() must be issued.

When begin_nested() is called, a flush() is unconditionally issued (regardless of the autoflush setting). This is so that when a rollback() occurs, the full state of the session is expired, thus causing all subsequent attribute/instance access to reference the full state of the Session right before begin_nested() was called.

Autocommit Mode

The example of Session transaction lifecycle illustrated at the start of Managing Transactions applies to a Session configured in the default mode of autocommit=False. Constructing a Session with autocommit=True produces a Session placed into autocommit mode, where each SQL statement invoked by a Session.query() or Session.execute() occurs using a new connection from the connection pool, discarding it after results have been iterated. The Session.flush() operation still occurs within the scope of a single transaction, though this transaction is closed out after the Session.flush() operation completes.

autocommit mode should not be considered for general use. While very old versions of SQLAlchemy standardized on this mode, the modern Session benefits highly from being given a clear point of transaction demarcation via Session.rollback() and Session.commit(). The autoflush action can safely emit SQL to the database as needed without implicitly producing permanent effects, the contents of attributes are expired only when a logical series of steps has completed. If the Session were to be used in pure autocommit mode without an ongoing transaction, these features should be disabled, that is, autoflush=False, expire_on_commit=False.

Modern usage of autocommit is for framework integrations that need to control specifically when the "begin" state occurs. A session which is configured with autocommit=True may be placed into the "begin" state using the Session.begin() method. After the cycle completes upon Session.commit() or Session.rollback(), connection and transaction resources are released and the Session goes back into autocommit mode, until Session.begin() is called again:

Session = sessionmaker(bind=engine, autocommit=True)
session = Session()
session.begin()
try:
    item1 = session.query(Item).get(1)
    item2 = session.query(Item).get(2)
    item1.foo = 'bar'
    item2.bar = 'foo'
    session.commit()
except:
    session.rollback()
    raise

The Session.begin() method also returns a transactional token which is compatible with the Python 2.6 with statement:

Session = sessionmaker(bind=engine, autocommit=True)
session = Session()
with session.begin():
    item1 = session.query(Item).get(1)
    item2 = session.query(Item).get(2)
    item1.foo = 'bar'
    item2.bar = 'foo'
session = Session(autocommit=True)
method_a(session)
session.close()

Subtransactions are used by the Session.flush() process to ensure that the flush operation takes place within a transaction, regardless of autocommit. When autocommit is disabled, it is still useful in that it forces the Session into a "pending rollback" state, as a failed flush cannot be resumed in mid-operation, where the end user still maintains the "scope" of the transaction overall.

Enabling Two-Phase Commit

For backends which support two-phase operation (currently MySQL and PostgreSQL), the session can be instructed to use two-phase commit semantics. This will coordinate the committing of transactions across databases so that the transaction is either committed or rolled back in all databases. You can also prepare() the session for interacting with transactions not managed by SQLAlchemy. To use two phase transactions set the flag twophase=True on the session:

engine1 = create_engine('postgresql://db1')
engine2 = create_engine('postgresql://db2')

Session = sessionmaker(twophase=True)

# bind User operations to engine 1, Account operations to engine 2
Session.configure(binds={User: engine1, Account: engine2})

session = Session()

# .... work with accounts and users

# commit.  session will issue a flush to all DBs, and a prepare step to all DBs,
# before committing both transactions
session.commit()
# need to specify mapper or class when executing
result = session.execute("select * from table where id=:id",
                         {'id': 7}, mapper=MyMappedClass)

result = session.execute(select([mytable], mytable.c.id==7),
                         mapper=MyMappedClass)

connection = session.connection(MyMappedClass)
engine = create_engine('postgresql://...')

class SomeTest(TestCase):
    def setUp(self):
        # connect to the database
        self.connection = engine.connect()

        # begin a non-ORM transaction
        self.trans = self.connection.begin()

        # bind an individual Session to the connection
        self.session = Session(bind=self.connection)

    def test_something(self):
        # use the session in tests.
        self.session.add(Foo())
        self.session.commit()

    def tearDown(self):
        # rollback - everything that happened with the
        # Session above (including calls to commit())
        # is rolled back.
        self.trans.rollback()
        self.session.close()

Above, we issue Session.commit() as well as Transaction.rollback(). This is an example of where we take advantage of the Connection object's ability to maintain subtransactions, or nested begin/commit-or-rollback pairs where only the outermost begin/commit pair actually commits the transaction, or if the outermost block rolls back, everything is rolled back.
>>> # later, in the same application thread, someone else calls Session()
>>> session2 = Session()

>>> # the two Session objects are *the same* object
>>> session is session2
True
Since the Session() constructor now returns the same Session object every time within the current thread, the object returned by scoped_session() also implements most of the Session methods and properties at the "class" level, such that you don't even need to instantiate Session():

# create some objects
u1 = User()
u2 = User()

# save to the contextual session, without instantiating
Session.add(u1)
Session.add(u2)

# view the "new" attribute
assert u1 in Session.new

# commit changes
Session.commit()

The contextual session may be disposed of by calling Session.remove():

# remove current contextual session
Session.remove()

After remove() is called, the next operation with the contextual session will start a new Session for the current thread.

Lifespan of a Contextual Session

A (really, really) common question is when does the contextual session get created, when does it get disposed? We'll consider a typical lifespan as used in a web application:

Web Server          Web Framework        User-defined Controller Call
--------------      --------------       ------------------------------
web request    ->
                    call controller ->   # call Session().  this establishes a new,
                                         # contextual Session.
                                         session = Session()

                                         # load some objects, save some changes
                                         objects = session.query(MyClass).all()

                                         # some other code calls Session, it's the
                                         # same contextual session as "session"
                                         session2 = Session()
                                         session2.add(foo)
                                         session2.commit()

                                         # generate content to be returned
                                         return generate_content()

                    Session.remove() <-
web response   <-

The above example illustrates an explicit call to ScopedSession.remove(). This has the effect such that each web request starts fresh with a brand new session, and is the most definitive approach to closing out a request. It's not strictly necessary to remove the session at the end of the request - other options include calling Session.close(), Session.rollback(), Session.commit() at the end so that the existing session
returns its connections to the pool and removes any existing transactional context. Doing nothing is an option too, if individual controller methods take responsibility for ensuring that no transactions remain open after a request ends.

Contextual Session API

sqlalchemy.orm.scoped_session(session_factory, scopefunc=None)
Provides thread-local or scoped management of Session objects.

This is a front-end function to ScopedSession:

Session = scoped_session(sessionmaker(autoflush=True))

To instantiate a Session object which is part of the scoped context, instantiate normally:

session = Session()

Most session methods are available as classmethods from the scoped session:

Session.commit()
Session.close()

See also: Contextual/Thread-local Sessions.

Parameters
session_factory - a callable function that produces Session instances, such as sessionmaker().
scopefunc - Optional scope function which would be passed to the ScopedRegistry. If None, the ThreadLocalRegistry is used by default.

Returns a ScopedSession instance

class sqlalchemy.orm.scoping.ScopedSession(session_factory, scopefunc=None)
Provides thread-local management of Sessions.

Typical invocation is via the scoped_session() function:

Session = scoped_session(sessionmaker())

The internal registry is accessible, and by default is an instance of ThreadLocalRegistry.

See also: Contextual/Thread-local Sessions.

configure(**kwargs)
reconfigure the sessionmaker used by this ScopedSession.

query_property(query_cls=None)
return a class property which produces a Query object against the class when called. e.g.:

Session = scoped_session(sessionmaker())

class MyClass(object):
    query = Session.query_property()

# after mappers are defined
result = MyClass.query.filter(MyClass.name=='foo').all()
Produces instances of the session's configured query class by default. To override and use a custom implementation, provide a query_cls callable. The callable will be invoked with the class's mapper as a positional argument and a session keyword argument. There is no limit to the number of query properties placed on a class.

remove()
Dispose of the current contextual session.

class sqlalchemy.util.ScopedRegistry(createfunc, scopefunc)
A Registry that can store one or multiple instances of a single class on the basis of a scope function.

The object implements __call__ as the getter, so by calling myregistry() the contained object is returned for the current scope.

Parameters
    createfunc - a callable that returns a new object to be placed in the registry
    scopefunc - a callable that will return a key to store/retrieve an object.

__init__(createfunc, scopefunc)
Construct a new ScopedRegistry.

Parameters
    createfunc - A creation function that will generate a new value for the current scope, if none is present.
    scopefunc - A function that returns a hashable token representing the current scope (such as, current thread identifier).

clear()
Clear the current scope, if any.

has()
Return True if an object is present in the current scope.

set(obj)
Set the value for the current scope.

class sqlalchemy.util.ThreadLocalRegistry(createfunc)
A ScopedRegistry that uses a threading.local() variable for storage.
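To make the registry mechanics concrete, here is a minimal sketch that drives a ScopedRegistry directly with a hand-rolled scope function; the mutable scope token and the creation function are hypothetical stand-ins for a thread identifier and a real session factory:

from sqlalchemy.util import ScopedRegistry

# hypothetical scope: a mutable token standing in for "current thread"
current_scope = ['scope-1']

def scopefunc():
    # must return a hashable key identifying the current scope
    return current_scope[0]

def createfunc():
    # called to produce a new object the first time a scope is seen
    return object()

registry = ScopedRegistry(createfunc, scopefunc)

obj = registry()              # __call__ creates and returns the object for scope-1
assert registry() is obj      # same scope, same object
assert registry.has()

current_scope[0] = 'scope-2'  # simulate switching threads
assert registry() is not obj  # a new object for the new scope

registry.clear()              # discards only the current scope's object
assert not registry.has()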
Horizontal Partitioning

Horizontal partitioning partitions the rows of a single table (or a set of tables) across multiple databases. See the sharding example: Horizontal Sharding.
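As a rough sketch of the idea, the horizontal_shard extension pairs a set of engines with chooser callables that route flushes and queries to shards. All names and routing rules below are illustrative assumptions, not the distribution example itself:

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.horizontal_shard import ShardedSession

shards = {
    'east': create_engine('sqlite:///east.db'),
    'west': create_engine('sqlite:///west.db'),
}

def shard_chooser(mapper, instance, clause=None):
    # pick a shard when flushing this instance; a naive example rule
    return 'east' if getattr(instance, 'region', 'east') == 'east' else 'west'

def id_chooser(query, ident):
    # given a primary key, list the shards that might hold the row
    return ['east', 'west']

def query_chooser(query):
    # list the shards an arbitrary query should consult; here, all of them
    return ['east', 'west']

Session = sessionmaker(class_=ShardedSession)
session = Session(shard_chooser=shard_chooser, id_chooser=id_chooser,
                  query_chooser=query_chooser, shards=shards)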
__init__(bind=None, autoflush=True, expire_on_commit=True, _enable_transaction_accounting=True, autocommit=False, twophase=False, weak_identity_map=True, binds=None, extension=None, query_cls=<class 'sqlalchemy.orm.query.Query'>)
Construct a new Session.

See also the sessionmaker() function which is used to generate a Session-producing callable with a given set of arguments.

Parameters

autocommit - Defaults to False. When True, the Session does not keep a persistent transaction running, and will acquire connections from the engine on an as-needed basis, returning them immediately after their use. Flushes will begin and commit (or possibly rollback) their own transaction if no transaction is present. When using this mode, the session.begin() method may be used to begin a transaction explicitly. Leaving it on its default value of False means that the Session will acquire a connection and begin a transaction the first time it is used, which it will maintain persistently until rollback(), commit(), or close() is called. When the transaction is released by any of these methods, the Session is ready for the next usage, which will again acquire and maintain a new connection/transaction.

autoflush - When True, all query operations will issue a flush() call to this Session before proceeding. This is a convenience feature so that flush() need not be called repeatedly in order for database queries to retrieve results. It's typical that autoflush is used in conjunction with autocommit=False. In this scenario, explicit calls to flush() are rarely needed; you usually only need to call commit() (which flushes) to finalize changes.

bind - An optional Engine or Connection to which this Session should be bound. When specified, all SQL operations performed by this session will execute via this connectable.

binds - An optional dictionary which contains more granular bind information than the bind parameter provides. This dictionary can map individual Table instances as well as Mapper instances to individual Engine or Connection objects. Operations which proceed relative to a particular Mapper will consult this dictionary for the direct Mapper instance as well as the mapper's mapped_table attribute in order to locate a connectable to use. The full resolution is described in the get_bind() method of Session. Usage looks like:

Session = sessionmaker(binds={
    SomeMappedClass: create_engine('postgresql://engine1'),
    somemapper: create_engine('postgresql://engine2'),
    some_table: create_engine('postgresql://engine3'),
    })

Also see the Session.bind_mapper() and Session.bind_table() methods.

class_ - Specify an alternate class other than sqlalchemy.orm.session.Session which should be used by the returned class. This is the only argument that is local to the sessionmaker() function, and is not sent directly to the constructor for Session.

_enable_transaction_accounting - Defaults to True. A legacy-only flag which when False disables all 0.5-style object accounting on transaction boundaries, including auto-expiry of instances on rollback and commit, maintenance of the new and deleted lists
upon rollback, and autoflush of pending changes upon begin(), all of which are interdependent.

expire_on_commit - Defaults to True. When True, all instances will be fully expired after each commit(), so that all attribute/object access subsequent to a completed transaction will load from the most recent database state.

extension - An optional SessionExtension instance, or a list of such instances, which will receive pre- and post-commit and flush events, as well as a post-rollback event. Deprecated. Please see SessionEvents.

query_cls - Class which should be used to create new Query objects, as returned by the query() method. Defaults to Query.

twophase - When True, all transactions will be started as a two phase transaction, i.e. using the two phase semantics of the database in use along with an XID. During a commit(), after flush() has been issued for all attached databases, the prepare() method on each database's TwoPhaseTransaction will be called. This allows each database to roll back the entire transaction, before each transaction is committed.

weak_identity_map - Defaults to True - when set to False, objects placed in the Session will be strongly referenced until explicitly removed or the Session is closed. Deprecated - this option is obsolete.

add(instance)
Place an object in the Session.

Its state will be persisted to the database on the next flush operation. Repeated calls to add() will be ignored. The opposite of add() is expunge().

add_all(instances)
Add the given collection of instances to this Session.

begin(subtransactions=False, nested=False)
Begin a transaction on this Session.

If this Session is already within a transaction, either a plain transaction or a nested transaction, an error is raised, unless subtransactions=True or nested=True is specified.

The subtransactions=True flag indicates that this begin() can create a subtransaction if a transaction is already in progress. For documentation on subtransactions, please see Using Subtransactions with Autocommit.

The nested flag begins a SAVEPOINT transaction and is equivalent to calling begin_nested(). For documentation on SAVEPOINT transactions, please see Using SAVEPOINT.

begin_nested()
Begin a nested transaction on this Session.

The target database(s) must support SQL SAVEPOINTs or a SQLAlchemy-supported vendor implementation of the idea. For documentation on SAVEPOINT transactions, please see Using SAVEPOINT.

bind_mapper(mapper, bind)
Bind operations for a mapper to a Connectable.

mapper - A mapper instance or mapped class
bind - Any Connectable: an Engine or Connection.

All subsequent operations involving this mapper will use the given bind.
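A minimal sketch of begin_nested(), assuming an existing session with autocommit=False, a mapped User class, and a backend that supports SAVEPOINT:

session.add(User(name='ed'))

session.begin_nested()           # establishes a SAVEPOINT
session.add(User(name='wendy'))
session.rollback()               # rolls back to the SAVEPOINT; "ed" remains pending

session.commit()                 # commits the enclosing transaction; only "ed" is saved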
bind_table(table, bind)
Bind operations on a Table to a Connectable.

table - A Table instance
bind - Any Connectable: an Engine or Connection.

All subsequent operations involving this Table will use the given bind.

close()
Close this Session.

This clears all items and ends any transaction in progress. If this session were created with autocommit=False, a new transaction is immediately begun. Note that this new transaction does not use any connection resources until they are first needed.

classmethod close_all()
Close all sessions in memory.

commit()
Flush pending changes and commit the current transaction.

If no transaction is in progress, this method raises an InvalidRequestError.

By default, the Session also expires all database loaded state on all ORM-managed attributes after transaction commit. This is so that subsequent operations load the most recent data from the database. This behavior can be disabled using the expire_on_commit=False option to sessionmaker() or the Session constructor.

If a subtransaction is in effect (which occurs when begin() is called multiple times), the subtransaction will be closed, and the next call to commit() will operate on the enclosing transaction.

For a session configured with autocommit=False, a new transaction will be begun immediately after the commit, but note that the newly begun transaction does not use any connection resources until the first SQL is actually emitted.

connection(mapper=None, clause=None, bind=None, close_with_result=False, **kw)
Return a Connection object corresponding to this Session object's transactional state.

If this Session is configured with autocommit=False, either the Connection corresponding to the current transaction is returned, or if no transaction is in progress, a new one is begun and the Connection returned (note that no transactional state is established with the DBAPI until the first SQL statement is emitted).

Alternatively, if this Session is configured with autocommit=True, an ad-hoc Connection is returned using Engine.contextual_connect() on the underlying Engine.

Ambiguity in multi-bind or unbound Session objects can be resolved through any of the optional keyword arguments. This ultimately makes usage of the get_bind() method for resolution.

Parameters

bind - Optional Engine to be used as the bind. If this engine is already involved in an ongoing transaction, that connection will be used. This argument takes precedence over mapper, clause.

mapper - Optional mapper() mapped class, used to identify the appropriate bind. This argument takes precedence over clause.

clause - A ClauseElement (i.e. select(), text(), etc.) which will be used to locate a bind, if a bind cannot otherwise be identified.
close_with_result - Passed to Engine.connect(), indicating the Connection should be considered single use, automatically closing when the first result set is closed. This flag only has an effect if this Session is configured with autocommit=True and does not already have a transaction in progress.

**kw - Additional keyword arguments are sent to get_bind(), allowing additional arguments to be passed to custom implementations of get_bind().

delete(instance)
Mark an instance as deleted.

The database delete operation occurs upon flush().

deleted
The set of all instances marked as 'deleted' within this Session.

dirty
The set of all persistent instances considered dirty.

Instances are considered dirty when they were modified but not deleted. Note that this 'dirty' calculation is optimistic; most attribute-setting or collection modification operations will mark an instance as dirty and place it in this set, even if there is no net change to the attribute's value. At flush time, the value of each attribute is compared to its previously saved value, and if there's no net change, no SQL operation will occur (this is a more expensive operation so it's only done at flush time). To check if an instance has actionable net changes to its attributes, use the is_modified() method.

execute(clause, params=None, mapper=None, bind=None, **kw)
Execute a clause within the current transaction.

Returns a ResultProxy representing results of the statement execution, in the same manner as that of an Engine or Connection.

execute() accepts any executable clause construct, such as select(), insert(), update(), delete(), and text(), and additionally accepts plain strings that represent SQL statements. If a plain string is passed, it is first converted to a text() construct, which here means that bind parameters should be specified using the format :param. If raw DBAPI statement execution is desired, use Session.connection() to acquire a Connection, then call its execute() method.

The statement is executed within the current transactional context of this Session, using the same behavior as that of the Session.connection() method to determine the active Connection. The close_with_result flag is set to True so that an autocommit=True Session with no active transaction will produce a result that auto-closes the underlying Connection.

Parameters

clause - A ClauseElement (i.e. select(), text(), etc.) or string SQL statement to be executed. The clause will also be used to locate a bind, if this Session is not bound to a single engine already, and the mapper and bind arguments are not passed.

params - Optional dictionary of bind names mapped to values.

mapper - Optional mapper() or mapped class, used to identify the appropriate bind. This argument takes precedence over clause when locating a bind.

bind - Optional Engine to be used as the bind. If this engine is already involved in an ongoing transaction, that connection will be used. This argument takes precedence over mapper and clause when locating a bind.

**kw - Additional keyword arguments are sent to get_bind(), allowing additional arguments to be passed to custom implementations of get_bind().
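For instance, a brief sketch of Session.execute() with a plain string statement and bound parameters; the table and column names here are hypothetical:

# the string is coerced to a text() construct; parameters use :name format
result = session.execute(
    "SELECT id, name FROM users WHERE name = :name",
    {'name': 'ed'})

for row in result:
    print row.id, row.name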
expire(instance, attribute_names=None)
Expire the attributes on an instance.

Marks the attributes of an instance as out of date. When an expired attribute is next accessed, a query will be issued to the Session object's current transactional context in order to load all expired attributes for the given instance. Note that a highly isolated transaction will return the same values as were previously read in that same transaction, regardless of changes in database state outside of that transaction.

To expire all objects in the Session simultaneously, use Session.expire_all().

The Session object's default behavior is to expire all state whenever the Session.rollback() or Session.commit() methods are called, so that new state can be loaded for the new transaction. For this reason, calling Session.expire() only makes sense for the specific case that a non-ORM SQL statement was emitted in the current transaction.

Parameters
    instance - The instance to be refreshed.
    attribute_names - optional list of string attribute names indicating a subset of attributes to be expired.

expire_all()
Expires all persistent instances within this Session.

When any attributes on a persistent instance are next accessed, a query will be issued using the Session object's current transactional context in order to load all expired attributes for the given instance. Note that a highly isolated transaction will return the same values as were previously read in that same transaction, regardless of changes in database state outside of that transaction.

To expire individual objects and individual attributes on those objects, use Session.expire().

The Session object's default behavior is to expire all state whenever the Session.rollback() or Session.commit() methods are called, so that new state can be loaded for the new transaction. For this reason, calling Session.expire_all() should not be needed when autocommit is False, assuming the transaction is isolated.

expunge(instance)
Remove the instance from this Session.

This will free all internal references to the instance. Cascading will be applied according to the expunge cascade rule.

expunge_all()
Remove all object instances from this Session.

This is equivalent to calling expunge(obj) on all objects in this Session.

flush(objects=None)
Flush all the object changes to the database.

Writes out all pending object creations, deletions and modifications to the database as INSERTs, DELETEs, UPDATEs, etc. Operations are automatically ordered by the Session's unit of work dependency solver.

Database operations will be issued in the current transactional context and do not affect the state of the transaction, unless an error occurs, in which case the entire transaction is rolled back. You may flush() as often as you like within a transaction to move changes from Python to the database's transaction buffer.

For autocommit Sessions with no active manual transaction, flush() will create a transaction on the fly that surrounds the entire set of operations in the flush.
objects - Optional; a list or tuple collection. Restricts the flush operation to only these objects, rather than all pending changes. Deprecated - this flag prevents the session from properly maintaining accounting among inter-object relations and can cause invalid results.

get_bind(mapper=None, clause=None)
Return a bind to which this Session is bound.

The bind is usually an instance of Engine, except in the case where the Session has been explicitly bound directly to a Connection.

For a multiply-bound or unbound Session, the mapper or clause arguments are used to determine the appropriate bind to return.

Note that the mapper argument is usually present when Session.get_bind() is called via an ORM operation such as a Session.query(), each individual INSERT/UPDATE/DELETE operation within a Session.flush() call, etc.

The order of resolution is:

1. if mapper given and session.binds is present, locate a bind based on mapper.
2. if clause given and session.binds is present, locate a bind based on Table objects found in the given clause present in session.binds.
3. if session.bind is present, return that.
4. if clause given, attempt to return a bind linked to the MetaData ultimately associated with the clause.
5. if mapper given, attempt to return a bind linked to the MetaData ultimately associated with the Table or other selectable to which the mapper is mapped.
6. No bind can be found, UnboundExecutionError is raised.

Parameters

mapper - Optional mapper() mapped class or instance of Mapper. The bind can be derived from a Mapper first by consulting the 'binds' map associated with this Session, and secondly by consulting the MetaData associated with the Table to which the Mapper is mapped for a bind.

clause - A ClauseElement (i.e. select(), text(), etc.). If the mapper argument is not present or could not produce a bind, the given expression construct will be searched for a bound element, typically a Table associated with bound MetaData.

is_active
True if this Session has an active transaction.

This indicates if the Session is capable of emitting SQL, as from the Session.execute(), Session.query(), or Session.flush() methods. If False, it indicates that the innermost transaction has been rolled back, but enclosing SessionTransaction objects remain in the transactional stack, which also must be rolled back.

This flag is generally only useful with a Session configured in its default mode of autocommit=False.

is_modified(instance, include_collections=True, passive=<symbol PASSIVE_OFF>)
Return True if instance has modified attributes.

This method retrieves a history instance for each instrumented attribute on the instance and performs a comparison of the current value to its previously committed value.
include_collections indicates if multivalued collections should be included in the operation. Setting this to False is a way to detect only local-column based properties (i.e. scalar columns or many-to-one foreign keys) that would result in an UPDATE for this instance upon flush.

The passive flag indicates if unloaded attributes and collections should not be loaded in the course of performing this test. Allowed values include PASSIVE_OFF, PASSIVE_NO_INITIALIZE.

A few caveats to this method apply:

Instances present in the 'dirty' collection may result in a value of False when tested with this method. This is because while the object may have received attribute set events, there may be no net changes on its state.

Scalar attributes may not have recorded the previously set value when a new value was applied, if the attribute was not loaded, or was expired, at the time the new value was received - in these cases, the attribute is assumed to have a change, even if there is ultimately no net change against its database value. SQLAlchemy in most cases does not need the 'old' value when a set event occurs, so it skips the expense of a SQL call if the old value isn't present, based on the assumption that an UPDATE of the scalar value is usually needed, and in those few cases where it isn't, is less expensive on average than issuing a defensive SELECT. The 'old' value is fetched unconditionally only if the attribute container has the active_history flag set to True. This flag is set typically for primary key attributes and scalar references that are not a simple many-to-one.

merge(instance, load=True, **kw)
Copy the state of an instance onto the persistent instance with the same identifier.

If there is no persistent instance currently associated with the session, it will be loaded. Return the persistent instance. If the given instance is unsaved, save a copy of it and return that copy as a newly persistent instance. The given instance does not become associated with the session.

This operation cascades to associated instances if the association is mapped with cascade="merge".

See Merging for a detailed discussion of merging.

new
The set of all instances marked as 'new' within this Session.

classmethod object_session(instance)
Return the Session to which an object belongs.

prepare()
Prepare the current transaction in progress for two phase commit.

If no transaction is in progress, this method raises an InvalidRequestError.

Only root transactions of two phase sessions can be prepared. If the current transaction is not such, an InvalidRequestError is raised.

prune()
Remove unreferenced instances cached in the identity map.

Deprecated since version 0.7: The non-weak-referencing identity map feature is no longer needed.

Note that this method is only meaningful if weak_identity_map is set to False. The default weak identity map is self-pruning.

Removes any object in this Session's identity map that is not referenced in user code, modified, new or scheduled for deletion. Returns the number of objects pruned.

query(*entities, **kwargs)
Return a new Query object corresponding to this Session.

refresh(instance, attribute_names=None, lockmode=None)
Expire and refresh the attributes on the given instance.
A query will be issued to the database and all attributes will be refreshed with their current database value.

Lazy-loaded relational attributes will remain lazily loaded, so that the instance-wide refresh operation will be followed immediately by the lazy load of that attribute.

Eagerly-loaded relational attributes will eagerly load within the single refresh operation.

Note that a highly isolated transaction will return the same values as were previously read in that same transaction, regardless of changes in database state outside of that transaction - usage of refresh() usually only makes sense if non-ORM SQL statements were emitted in the ongoing transaction, or if autocommit mode is turned on.

Parameters
    attribute_names - optional. An iterable collection of string attribute names indicating a subset of attributes to be refreshed.
    lockmode - Passed to the Query as used by with_lockmode().

rollback()
Rollback the current transaction in progress.

If no transaction is in progress, this method is a pass-through.

This method rolls back the current transaction or nested transaction regardless of subtransactions being in effect. All subtransactions up to the first real transaction are closed. Subtransactions occur when begin() is called multiple times.

scalar(clause, params=None, mapper=None, bind=None, **kw)
Like execute() but return a scalar result.

transaction
The current active or inactive SessionTransaction.

class sqlalchemy.orm.session.SessionTransaction(session, parent=None, nested=False)
A Session-level transaction.

This corresponds to one or more Core Transaction instances behind the scenes, with one Transaction per Engine in use.

Direct usage of SessionTransaction is not typically necessary as of SQLAlchemy 0.4; use the Session.rollback() and Session.commit() methods on Session itself to control the transaction.

The current instance of SessionTransaction for a given Session is available via the Session.transaction attribute.

The SessionTransaction object is not thread-safe.

See also:
Session.rollback()
Session.commit()
Session.is_active
SessionEvents.after_commit()
SessionEvents.after_rollback()
SessionEvents.after_soft_rollback()
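To illustrate the subtransaction behavior referenced above, a sketch of the begin(subtransactions=True) pattern, assuming a Session configured with autocommit=True and a hypothetical mapped class SomeObject:

def method_a(session):
    # the outermost begin() starts the real transaction
    session.begin(subtransactions=True)
    try:
        method_b(session)
        session.commit()        # here, commits the real transaction
    except:
        session.rollback()
        raise

def method_b(session):
    # an enclosing transaction exists, so this begin() creates a
    # subtransaction; its commit() only closes the subtransaction
    session.begin(subtransactions=True)
    try:
        session.add(SomeObject())
        session.commit()
    except:
        session.rollback()
        raise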
Session Utilities

sqlalchemy.orm.session.make_transient(instance)
Make the given instance 'transient'.

This will remove its association with any session and additionally will remove its identity key, such that it's as though the object were newly constructed, except retaining its values. It also resets the 'deleted' flag on the state if this object had been explicitly deleted by its session.

Attributes which were 'expired' or deferred at the instance level are reverted to undefined, and will not trigger any loads.

sqlalchemy.orm.session.object_session(instance)
Return the Session to which instance belongs.

If the instance is not a mapped instance, an error is raised.

Attribute and State Management Utilities

These functions are provided by the SQLAlchemy attribute instrumentation API to provide a detailed interface for dealing with instances, attribute values, and history. Some of them are useful when constructing event listener functions, such as those described in events_orm_toplevel.

sqlalchemy.orm.attributes.del_attribute(instance, key)
Delete the value of an attribute, firing history events.

This function may be used regardless of instrumentation applied directly to the class, i.e. no descriptors are required. Custom attribute management schemes will need to make usage of this method to establish attribute state as understood by SQLAlchemy.

sqlalchemy.orm.attributes.get_attribute(instance, key)
Get the value of an attribute, firing any callables required.

This function may be used regardless of instrumentation applied directly to the class, i.e. no descriptors are required. Custom attribute management schemes will need to make usage of this method to make use of attribute state as understood by SQLAlchemy.

sqlalchemy.orm.attributes.get_history(obj, key, passive=<symbol PASSIVE_OFF>)
Return a History record for the given object and attribute key.

Parameters
    obj - an object whose class is instrumented by the attributes package.
    key - string attribute name.
    passive - indicates if the attribute should be loaded from the database if not already present (PASSIVE_NO_FETCH), and if the attribute should be not initialized to a blank value otherwise (PASSIVE_NO_INITIALIZE). Default is PASSIVE_OFF.

sqlalchemy.orm.attributes.init_collection(obj, key)
Initialize a collection attribute and return the collection adapter.

This function is used to provide direct access to collection internals for a previously unloaded attribute, e.g.:

collection_adapter = init_collection(someobject, 'elements')
for elem in values:
    collection_adapter.append_without_event(elem)

For an easier way to do the above, see set_committed_value().
obj is an instrumented object instance. An InstanceState is accepted directly for backwards compatibility but this usage is deprecated.

sqlalchemy.orm.attributes.flag_modified(instance, key)
Mark an attribute on an instance as 'modified'.

This sets the 'modified' flag on the instance and establishes an unconditional change event for the given attribute.

sqlalchemy.orm.attributes.instance_state()
Return the InstanceState for a given object.

sqlalchemy.orm.attributes.manager_of_class()
Return the ClassManager for a given class.

sqlalchemy.orm.attributes.set_attribute(instance, key, value)
Set the value of an attribute, firing history events.

This function may be used regardless of instrumentation applied directly to the class, i.e. no descriptors are required. Custom attribute management schemes will need to make usage of this method to establish attribute state as understood by SQLAlchemy.

sqlalchemy.orm.attributes.set_committed_value(instance, key, value)
Set the value of an attribute with no history events.

Cancels any previous history present. The value should be a scalar value for scalar-holding attributes, or an iterable for any collection-holding attribute.

This is the same underlying method used when a lazy loader fires off and loads additional data from the database. In particular, this method can be used by application code which has loaded additional attributes or collections through separate queries, which can then be attached to an instance as though it were part of its original loaded state.

class sqlalchemy.orm.attributes.History
A 3-tuple of added, unchanged and deleted values, representing the changes which have occurred on an instrumented attribute.

Each tuple member is an iterable sequence.

added
Return the collection of items added to the attribute (the first tuple element).

deleted
Return the collection of items that have been removed from the attribute (the third tuple element).

empty()
Return True if this History has no changes and no existing, unchanged state.

has_changes()
Return True if this History has changes.

non_added()
Return a collection of unchanged + deleted.

non_deleted()
Return a collection of added + unchanged.

sum()
Return a collection of added + unchanged + deleted.

unchanged
Return the collection of items that have not changed on the attribute (the second tuple element).
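As a sketch of the set_committed_value() use case described above - attaching separately-queried data as though it were originally loaded - assuming the tutorial's User/Address mapping and an existing session:

from sqlalchemy.orm.attributes import set_committed_value

user = session.query(User).get(5)

# load the related rows with a separate query, then attach them to the
# instance with no history events -- no flush will result from this
addresses = session.query(Address).filter(Address.user_id == 5).all()
set_committed_value(user, 'addresses', addresses)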
sqlalchemy.orm.attributes.PASSIVE_NO_INITIALIZE
Symbol indicating that loader callables should not be fired off, and a non-initialized attribute should remain that way.

sqlalchemy.orm.attributes.PASSIVE_NO_FETCH
Symbol indicating that loader callables should not emit SQL, but a value can be fetched from the current session. Non-initialized attributes should be initialized to an empty value.

sqlalchemy.orm.attributes.PASSIVE_NO_FETCH_RELATED
Symbol indicating that loader callables should not emit SQL for loading a related object, but can refresh the attributes of the local instance in order to locate a related object in the current session. Non-initialized attributes should be initialized to an empty value. The unit of work uses this mode to check if history is present on many-to-one attributes with minimal SQL emitted.

sqlalchemy.orm.attributes.PASSIVE_ONLY_PERSISTENT
Symbol indicating that loader callables should only fire off for parent objects which are persistent (i.e., have a database identity). Load operations for the 'previous' value of an attribute make use of this flag during change events.

sqlalchemy.orm.attributes.PASSIVE_OFF
Symbol indicating that loader callables should be executed normally.
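A short sketch tying the passive symbols to get_history(), again assuming the tutorial's User mapping and an existing session; with PASSIVE_NO_INITIALIZE, pending changes can be inspected without emitting any SQL:

from sqlalchemy.orm.attributes import get_history, PASSIVE_NO_INITIALIZE

user = session.query(User).first()
user.name = 'newname'

hist = get_history(user, 'name', passive=PASSIVE_NO_INITIALIZE)
print hist.added          # ['newname']
print hist.deleted        # the prior value, if it had been loaded
print hist.has_changes()  # True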
2.7 Querying
This section provides API documentation for the Query object and related constructs. For an in-depth introduction to querying with the SQLAlchemy ORM, please see the Object Relational Tutorial.
add_entity(entity, alias=None)
Add a mapped entity to the list of result columns to be returned.

all()
Return the results represented by this Query as a list.

This results in an execution of the underlying query.

as_scalar()
Return the full SELECT statement represented by this Query, converted to a scalar subquery.

Analogous to sqlalchemy.sql._SelectBaseMixin.as_scalar().

New in 0.6.5.

autoflush(setting)
Return a Query with a specific 'autoflush' setting.

Note that a Session with autoflush=False will not autoflush, even if this flag is set to True at the Query level. Therefore this flag is usually used only to disable autoflush for a specific Query.

column_descriptions
Return metadata about the columns which would be returned by this Query.

Format is a list of dictionaries:

user_alias = aliased(User, name='user2')
q = sess.query(User, User.id, user_alias)

# this expression:
q.column_descriptions

# would return:
[
    {
        'name': 'User',
        'type': User,
        'aliased': False,
        'expr': User,
    },
    {
        'name': 'id',
        'type': Integer(),
        'aliased': False,
        'expr': User.id,
    },
    {
        'name': 'user2',
        'type': User,
        'aliased': True,
        'expr': user_alias
    }
]

correlate(*args)
Return a Query construct which will correlate the given FROM clauses to that of an enclosing Query or select().
The method here accepts mapped classes, aliased() constructs, and mapper() constructs as arguments, which are resolved into expression constructs, in addition to appropriate expression constructs.

The correlation arguments are ultimately passed to Select.correlate() after coercion to expression constructs.

The correlation arguments take effect in such cases as when Query.from_self() is used, or when a subquery as returned by Query.subquery() is embedded in another select() construct.

count()
Return a count of rows this Query would return.

This generates the SQL for this Query as follows:

SELECT count(1) AS count_1 FROM (
    SELECT <rest of query follows...>
) AS anon_1

Note the above scheme is newly refined in 0.7 (as of 0.7b3).

For fine grained control over specific columns to count, to skip the usage of a subquery or otherwise control the FROM clause, or to use other aggregate functions, use func expressions in conjunction with query(), i.e.:

from sqlalchemy import func

# count User records, without
# using a subquery.
session.query(func.count(User.id))

# return count of user "id" grouped
# by "name"
session.query(func.count(User.id)).\
    group_by(User.name)

from sqlalchemy import distinct

# count distinct "name" values
session.query(func.count(distinct(User.name)))

delete(synchronize_session='evaluate')
Perform a bulk delete query.

Deletes rows matched by this query from the database.

Parameters

synchronize_session - chooses the strategy for the removal of matched objects from the session. Valid values are:

False - don't synchronize the session. This option is the most efficient and is reliable once the session is expired, which typically occurs after a commit(), or explicitly using expire_all(). Before the expiration, objects may still remain in the session which were in fact deleted, which can lead to confusing results if they are accessed via get() or already loaded collections.

'fetch' - performs a select query before the delete to find objects that are matched by the delete query and need to be removed from the session. Matched objects are removed from the session.
'evaluate' - Evaluate the query's criteria in Python straight on the objects in the session. If evaluation of the criteria isn't implemented, an error is raised. In that case you probably want to use the 'fetch' strategy as a fallback. The expression evaluator currently doesn't account for differing string collations between the database and Python.

Returns the number of rows deleted, excluding any cascades.

The method does not offer in-Python cascading of relationships - it is assumed that ON DELETE CASCADE is configured for any foreign key references which require it. The Session needs to be expired (occurs automatically after commit(), or call expire_all()) in order for the state of dependent objects subject to delete or delete-orphan cascade to be correctly represented.

Also, the before_delete() and after_delete() MapperExtension methods are not called from this method. For a delete hook here, use the SessionExtension.after_bulk_delete() event hook.

distinct(*criterion)
Apply a DISTINCT to the query and return the newly resulting Query.

Parameters
    *expr - optional column expressions. When present, the Postgresql dialect will render a DISTINCT ON (<expressions>) construct.

enable_assertions(value)
Control whether assertions are generated.

When set to False, the returned Query will not assert its state before certain operations, including that LIMIT/OFFSET has not been applied when filter() is called, no criterion exists when get() is called, and no from_statement() exists when filter()/order_by()/group_by() etc. is called. This more permissive mode is used by custom Query subclasses to specify criterion or other modifiers outside of the usual usage patterns.

Care should be taken to ensure that the usage pattern is even possible. A statement applied by from_statement() will override any criterion set by filter() or order_by(), for example.

enable_eagerloads(value)
Control whether or not eager joins and subqueries are rendered.

When set to False, the returned Query will not render eager joins regardless of joinedload(), subqueryload() options or mapper-level lazy='joined'/lazy='subquery' configurations.

This is used primarily when nesting the Query's statement into a subquery or other selectable.

except_(*q)
Produce an EXCEPT of this Query against one or more queries.

Works the same way as union(). See that method for usage examples.

except_all(*q)
Produce an EXCEPT ALL of this Query against one or more queries.

Works the same way as union(). See that method for usage examples.

execution_options(**kwargs)
Set non-SQL options which take effect during execution.

The options are the same as those accepted by Connection.execution_options().

Note that the stream_results execution option is enabled automatically if the yield_per() method is used.

filter(criterion)
Apply the given filtering criterion to the query and return the newly resulting Query.
The criterion is any sql.ClauseElement applicable to the WHERE clause of a select.

filter_by(**kwargs)
Apply the given filtering criterion to the query and return the newly resulting Query.

first()
Return the first result of this Query or None if the result doesn't contain any row.

first() applies a limit of one within the generated SQL, so that only one primary entity row is generated on the server side (note this may consist of multiple result rows if join-loaded collections are present).

Calling first() results in an execution of the underlying query.

from_self(*entities)
Return a Query that selects from this Query's SELECT statement.

*entities - optional list of entities which will replace those being selected.

from_statement(statement)
Execute the given SELECT statement and return results.

This method bypasses all internal statement compilation, and the statement is executed without modification.

The statement argument is either a string, a select() construct, or a text() construct, and should return the set of columns appropriate to the entity class represented by this Query.

get(ident)
Return an instance based on the given primary key identifier, or None if not found.

E.g.:

my_user = session.query(User).get(5)

some_object = session.query(VersionedFoo).get((5, 10))

get() is special in that it provides direct access to the identity map of the owning Session. If the given primary key identifier is present in the local identity map, the object is returned directly from this collection and no SQL is emitted, unless the object has been marked fully expired. If not present, a SELECT is performed in order to locate the object.

get() also will perform a check if the object is present in the identity map and marked as expired - a SELECT is emitted to refresh the object as well as to ensure that the row is still present. If not, ObjectDeletedError is raised.

get() is only used to return a single mapped instance, not multiple instances or individual column constructs, and strictly on a single primary key value. The originating Query must be constructed in this way, i.e. against a single mapped entity, with no additional filtering criterion. Loading options via options() may be applied however, and will be used if the object is not yet locally present.

A lazy-loading, many-to-one attribute configured by relationship(), using a simple foreign-key-to-primary-key criterion, will also use an operation equivalent to get() in order to retrieve the target value from the local identity map before querying the database. See Relationship Loading Techniques for further details on relationship loading.

Parameters
    ident - A scalar or tuple value representing the primary key. For a composite primary key, the order of identifiers corresponds in most cases to that of the mapped Table object's primary key columns. For a mapper() that was given the primary_key argument during construction, the order of identifiers corresponds to the elements present in this collection.

Returns The object instance, or None.
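To illustrate the filter()/filter_by() distinction documented above, using the tutorial's User mapping:

# filter() accepts SQL expression constructs
q = session.query(User).filter(User.name == 'ed')

# filter_by() accepts keyword arguments matched against the primary entity
q = session.query(User).filter_by(name='ed')

# successive calls are joined via AND
q = session.query(User).filter(User.name == 'ed').filter(User.id > 5)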
group_by(*criterion)
Apply one or more GROUP BY criterion to the query and return the newly resulting Query.

having(criterion)
Apply a HAVING criterion to the query and return the newly resulting Query.

instances(cursor, _Query__context=None)
Given a ResultProxy cursor as returned by connection.execute(), return an ORM result as an iterator, e.g.:

result = engine.execute("select * from users")
for u in session.query(User).instances(result):
    print u

intersect(*q)
Produce an INTERSECT of this Query against one or more queries.

Works the same way as union(). See that method for usage examples.

intersect_all(*q)
Produce an INTERSECT ALL of this Query against one or more queries.

Works the same way as union(). See that method for usage examples.

join(*props, **kwargs)
Create a SQL JOIN against this Query object's criterion and apply generatively, returning the newly resulting Query.

Simple Relationship Joins

Consider a mapping between two classes User and Address, with a relationship User.addresses representing a collection of Address objects associated with each User. The most common usage of join() is to create a JOIN along this relationship, using the User.addresses attribute as an indicator for how this should occur:

q = session.query(User).join(User.addresses)

Where above, the call to join() along User.addresses will result in SQL equivalent to:

SELECT user.* FROM user JOIN address ON user.id = address.user_id

In the above example we refer to User.addresses as passed to join() as the 'on clause', that is, it indicates how the 'ON' portion of the JOIN should be constructed.

For a single-entity query such as the one above (i.e. we start by selecting only from User and nothing else), the relationship can also be specified by its string name:

q = session.query(User).join("addresses")

join() can also accommodate multiple 'on clause' arguments to produce a chain of joins, such as below where a join across four related entities is constructed:

q = session.query(User).join("orders", "items", "keywords")

The above would be shorthand for three separate calls to join(), each using an explicit attribute to indicate the source entity:
q = session.query(User).\
    join(User.orders).\
    join(Order.items).\
    join(Item.keywords)

Joins to a Target Entity or Selectable

A second form of join() allows any mapped entity or core selectable construct as a target. In this usage, join() will attempt to create a JOIN along the natural foreign key relationship between two entities:

q = session.query(User).join(Address)

The above calling form of join() will raise an error if either there are no foreign keys between the two entities, or if there are multiple foreign key linkages between them. In the above calling form, join() is called upon to create the 'on clause' automatically for us. The target can be any mapped entity or selectable, such as a Table:

q = session.query(User).join(addresses_table)

Joins to a Target with an ON Clause

The third calling form allows both the target entity as well as the ON clause to be passed explicitly. Suppose for example we wanted to join to Address twice, using an alias the second time. We use aliased() to create a distinct alias of Address, and join to it using the target, onclause form, so that the alias can be specified explicitly as the target along with the relationship to instruct how the ON clause should proceed:

a_alias = aliased(Address)

q = session.query(User).\
    join(User.addresses).\
    join(a_alias, User.addresses).\
    filter(Address.email_address == 'ed@foo.com').\
    filter(a_alias.email_address == 'ed@bar.com')

Where above, the generated SQL would be similar to:

SELECT user.* FROM user
    JOIN address ON user.id = address.user_id
    JOIN address AS address_1 ON user.id = address_1.user_id
    WHERE address.email_address = :email_address_1
    AND address_1.email_address = :email_address_2

The two-argument calling form of join() also allows us to construct arbitrary joins with SQL-oriented 'on clause' expressions, not relying upon configured relationships at all. Any SQL expression can be passed as the ON clause when using the two-argument form, which should refer to the target entity in some way as well as an applicable source entity:

q = session.query(User).join(Address, User.id==Address.user_id)

Note: In SQLAlchemy 0.6 and earlier, the two argument form of join() requires the usage of a tuple:

query(User).join((Address, User.id==Address.user_id))
This calling form is accepted in 0.7 and further, though is not necessary unless multiple join conditions are passed to a single join() call, which itself is also not generally necessary as it is now equivalent to multiple calls (this wasn't always the case).

Advanced Join Targeting and Adaption

There is a lot of flexibility in what the 'target' can be when using join(). As noted previously, it also accepts Table constructs and other selectables such as alias() and select() constructs, with either the one or two-argument forms:

addresses_q = select([Address.user_id]).\
    where(Address.email_address.endswith("@bar.com")).\
    alias()

q = session.query(User).\
    join(addresses_q, addresses_q.c.user_id==User.id)

join() also features the ability to adapt a relationship() -driven ON clause to the target selectable. Below we construct a JOIN from User to a subquery against Address, allowing the relationship denoted by User.addresses to adapt itself to the altered target:

address_subq = session.query(Address).\
    filter(Address.email_address == 'ed@foo.com').\
    subquery()

q = session.query(User).join(address_subq, User.addresses)

Producing SQL similar to:

SELECT user.* FROM user
    JOIN (
        SELECT address.id AS id,
            address.user_id AS user_id,
            address.email_address AS email_address
        FROM address
        WHERE address.email_address = :email_address_1
    ) AS anon_1 ON user.id = anon_1.user_id

The above form allows one to fall back onto an explicit ON clause at any time:

q = session.query(User).\
    join(address_subq, User.id==address_subq.c.user_id)

Controlling what to Join From

While join() exclusively deals with the 'right side' of the JOIN, we can also control the 'left side', in those cases where it's needed, using select_from(). Below we construct a query against Address but can still make usage of User.addresses as our ON clause by instructing the Query to select first from the User entity:

q = session.query(Address).select_from(User).\
    join(User.addresses).\
    filter(User.name == 'ed')

Which will produce SQL similar to:
SELECT address.* FROM user
    JOIN address ON user.id = address.user_id
    WHERE user.name = :name_1

Constructing Aliases Anonymously

join() can construct anonymous aliases using the aliased=True flag. This feature is useful when a query is being joined algorithmically, such as when querying self-referentially to an arbitrary depth:

q = session.query(Node).\
    join("children", "children", aliased=True)

When aliased=True is used, the actual 'alias' construct is not explicitly available. To work with it, methods such as Query.filter() will adapt the incoming entity to the last join point:

q = session.query(Node).\
    join("children", "children", aliased=True).\
    filter(Node.name == 'grandchild 1')

When using automatic aliasing, the from_joinpoint=True argument can allow a multi-node join to be broken into multiple calls to join(), so that each path along the way can be further filtered:

q = session.query(Node).\
    join("children", aliased=True).\
    filter(Node.name == 'child 1').\
    join("children", aliased=True, from_joinpoint=True).\
    filter(Node.name == 'grandchild 1')

The filtering aliases above can then be reset back to the original Node entity using reset_joinpoint():

q = session.query(Node).\
    join("children", "children", aliased=True).\
    filter(Node.name == 'grandchild 1').\
    reset_joinpoint().\
    filter(Node.name == 'parent 1')

For an example of aliased=True, see the distribution example XML Persistence which illustrates an XPath-like query system using algorithmic joins.

Parameters

*props - A collection of one or more join conditions, each consisting of a relationship-bound attribute or string relationship name representing an 'on clause', or a single target entity, or a tuple in the form of (target, onclause). A special two-argument calling form of the form target, onclause is also accepted.

aliased=False - If True, indicate that the JOIN target should be anonymously aliased. Subsequent calls to filter and similar will adapt the incoming criterion to the target alias, until reset_joinpoint() is called.

from_joinpoint=False - When using aliased=True, a setting of True here will cause the join to be from the most recent joined target, rather than starting back from the original FROM clauses of the query.

See also:
Querying with Joins in the ORM tutorial.

Mapping Class Inheritance Hierarchies for details on how join() is used for inheritance relationships.

orm.join() - a standalone ORM-level join function, used internally by Query.join(), which in previous SQLAlchemy versions was the primary ORM-level joining interface.

label(name)
Return the full SELECT statement represented by this Query, converted to a scalar subquery with a label of the given name.

Analogous to sqlalchemy.sql._SelectBaseMixin.label().

New in 0.6.5.

limit(limit)
Apply a LIMIT to the query and return the newly resulting Query.

merge_result(iterator, load=True)
Merge a result into this Query object's Session.

Given an iterator returned by a Query of the same structure as this one, return an identical iterator of results, with all mapped instances merged into the session using Session.merge(). This is an optimized method which will merge all mapped instances, preserving the structure of the result rows and unmapped columns with less method overhead than that of calling Session.merge() explicitly for each value.

The structure of the results is determined based on the column list of this Query - if these do not correspond, unchecked errors will occur.

The 'load' argument is the same as that of Session.merge().

For an example of how merge_result() is used, see the source code for the example Beaker Caching, where merge_result() is used to efficiently restore state from a cache back into a target Session.

offset(offset)
Apply an OFFSET to the query and return the newly resulting Query.

one()
Return exactly one result or raise an exception.

Raises sqlalchemy.orm.exc.NoResultFound if the query selects no rows. Raises sqlalchemy.orm.exc.MultipleResultsFound if multiple object identities are returned, or if multiple rows are returned for a query that does not return object identities.

Note that an entity query, that is, one which selects one or more mapped classes as opposed to individual column attributes, may ultimately represent many rows but only one row of unique entity or entities - this is a successful result for one().

Calling one() results in an execution of the underlying query. As of 0.6, one() fully fetches all results instead of applying any kind of limit, so that the 'unique'-ing of entities does not conceal multiple object identities.

options(*args)
Return a new Query object, applying the given list of mapper options.

Most supplied options regard changing how column- and relationship-mapped attributes are loaded. See the sections Deferred Column Loading and Relationship Loading Techniques for reference documentation.

order_by(*criterion)
Apply one or more ORDER BY criterion to the query and return the newly resulting Query.
All existing ORDER BY settings can be suppressed by passing None - this will suppress any ORDER BY configured on mappers as well.

Alternatively, an existing ORDER BY setting on the Query object can be entirely cancelled by passing False as the value - use this before calling methods where an ORDER BY is invalid.

outerjoin(*props, **kwargs)
Create a left outer join against this Query object's criterion and apply generatively, returning the newly resulting Query.

Usage is the same as the join() method.

params(*args, **kwargs)
Add values for bind parameters which may have been specified in filter().

Parameters may be specified using **kwargs, or optionally a single dictionary as the first positional argument. The reason for both is that **kwargs is convenient, however some parameter dictionaries contain unicode keys in which case **kwargs cannot be used.

populate_existing()
Return a Query that will expire and refresh all instances as they are loaded, or reused from the current Session.

populate_existing() does not improve behavior when the ORM is used normally - the Session object's usual behavior of maintaining a transaction and expiring all attributes after rollback or commit handles object state automatically. This method is not intended for general use.

reset_joinpoint()
Return a new Query, where the 'join point' has been reset back to the base FROM entities of the query.

This method is usually used in conjunction with the aliased=True feature of the join() method. See the example in join() for how this is used.

scalar()
Return the first element of the first result or None if no rows present. If multiple rows are returned, raises MultipleResultsFound.

>>> session.query(Item).scalar()
<Item>
>>> session.query(Item.id).scalar()
1
>>> session.query(Item.id).filter(Item.id < 0).scalar()
None
>>> session.query(Item.id, Item.name).scalar()
1
>>> session.query(func.count(Parent.id)).scalar()
20

This results in an execution of the underlying query.

select_from(*from_obj)
Set the FROM clause of this Query explicitly.

Sending a mapped class or entity here effectively replaces the 'left edge' of any calls to join(), when no joinpoint is otherwise established - usually, the default 'join point' is the leftmost entity in the Query object's list of entities to be selected.

Mapped entities or plain Table or other selectables can be sent here which will form the default FROM clause.

See the example in join() for a typical usage of select_from().
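A brief sketch of the order_by() cancellation behavior described earlier in this section, assuming the usual User mapping:

q = session.query(User).order_by(User.name)

# passing None suppresses all ORDER BY, including mapper-level defaults;
# useful before wrapping the query in a UNION or a subquery
q = q.order_by(None)
subq = q.subquery()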
slice(start, stop)
Apply LIMIT/OFFSET to the Query based on a range and return the newly resulting Query.

statement
The full SELECT statement represented by this Query.

The statement by default will not have disambiguating labels applied to the construct unless with_labels(True) is called first.

subquery(name=None)
Return the full SELECT statement represented by this Query, embedded within an Alias.

Eager JOIN generation within the query is disabled.

The statement will not have disambiguating labels applied to the list of selected columns unless the Query.with_labels() method is used to generate a new Query with the option enabled.

Parameters
    name - string name to be assigned as the alias; this is passed through to FromClause.alias(). If None, a name will be deterministically generated at compile time.

union(*q)
Produce a UNION of this Query against one or more queries.

e.g.:

q1 = sess.query(SomeClass).filter(SomeClass.foo == 'bar')
q2 = sess.query(SomeClass).filter(SomeClass.bar == 'foo')

q3 = q1.union(q2)

The method accepts multiple Query objects so as to control the level of nesting. A series of union() calls such as:

x.union(y).union(z).all()

will nest on each union(), and produces:

SELECT * FROM (SELECT * FROM (SELECT * FROM X UNION SELECT * FROM y) UNION SELECT * FROM Z)

Whereas:

x.union(y, z).all()

produces:

SELECT * FROM (SELECT * FROM X UNION SELECT * FROM y UNION SELECT * FROM Z)

Note that many database backends do not allow ORDER BY to be rendered on a query called within UNION, EXCEPT, etc. To disable all ORDER BY clauses including those configured on mappers, issue query.order_by(None) - the resulting Query object will not render ORDER BY within its SELECT statement.

union_all(*q)
Produce a UNION ALL of this Query against one or more queries.

Works the same way as union(). See that method for usage examples.
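slice() corresponds to Python index access on the Query; a minimal sketch, assuming the usual User mapping:

# equivalent forms, both emitting LIMIT 10 OFFSET 40
page = session.query(User).slice(40, 50).all()
page = session.query(User)[40:50]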
update(values, synchronize_session='evaluate')
Perform a bulk update query.

Updates rows matched by this query in the database.

Parameters

values - a dictionary with attribute names as keys and literal values or sql expressions as values.

synchronize_session - chooses the strategy to update the attributes on objects in the session. Valid values are:

False - don't synchronize the session. This option is the most efficient and is reliable once the session is expired, which typically occurs after a commit(), or explicitly using expire_all(). Before the expiration, updated objects may still remain in the session with stale values on their attributes, which can lead to confusing results.

'fetch' - performs a select query before the update to find objects that are matched by the update query. The updated attributes are expired on matched objects.

'evaluate' - Evaluate the Query's criteria in Python straight on the objects in the session. If evaluation of the criteria isn't implemented, an exception is raised. The expression evaluator currently doesn't account for differing string collations between the database and Python.

Returns the number of rows matched by the update.

The method does not offer in-Python cascading of relationships - it is assumed that ON UPDATE CASCADE is configured for any foreign key references which require it. The Session needs to be expired (occurs automatically after commit(), or call expire_all()) in order for the state of dependent objects subject to foreign key cascade to be correctly represented.

Also, the before_update() and after_update() MapperExtension methods are not called from this method. For an update hook here, use the SessionExtension.after_bulk_update() event hook.

value(column)
Return a scalar result corresponding to the given column expression.

values(*columns)
Return an iterator yielding result tuples corresponding to the given list of columns.

whereclause
A readonly attribute which returns the current WHERE criterion for this Query.

This returned value is a SQL expression construct, or None if no criterion has been established.

with_entities(*entities)
Return a new Query replacing the SELECT list with the given entities, e.g.:

# Users, filtered on some arbitrary criterion
# and then ordered by related email address
q = session.query(User).\
    join(User.address).\
    filter(User.name.like('%ed%')).\
    order_by(Address.email)

# given *only* User.id==5, Address.email, and 'q', what
# would the *next* User in the result be ? subq = q.with_entities(Address.email).\ order_by(None).\ filter(User.id==5).\ subquery() q = q.join((subq, subq.c.email < Address.email)).\ limit(1) New in 0.6.5. with_hint(selectable, text, dialect_name=*) Add an indexing hint for the given entity or selectable to this Query. Functionality is passed straight through to with_hint(), with the addition that selectable can be a Table, Alias, or ORM entity / mapped class /etc. with_labels() Apply column labels to the return value of Query.statement. Indicates that this Querys statement accessor should return a SELECT statement that applies labels to all columns in the form <tablename>_<columnname>; this is commonly used to disambiguate columns from multiple tables which have the same name. When the Query actually issues SQL to load rows, it always uses column labeling. with_lockmode(mode) Return a new Query object with the specied locking mode. with_parent(instance, property=None) Add ltering criterion that relates the given instance to a child object or collection, using its attribute state as well as an established relationship() conguration. The method uses the with_parent() function to generate the clause, the result of which is passed to Query.filter(). Parameters are the same as with_parent(), with the exception that the given property can be None, in which case a search is performed against this Query objects target mapper. with_polymorphic(cls_or_mappers, selectable=None, discriminator=None) Load columns for descendant mappers of this Querys mapper. Using this method will ensure that each descendant mappers tables are included in the FROM clause, and will allow lter() criterion to be used against those tables. The resulting instances will also have those columns already loaded so that no post fetch of those columns will be required. Parameters cls_or_mappers a single class or mapper, or list of class/mappers, which inherit from this Querys mapper. Alternatively, it may also be the string *, in which case all descending mappers will be added to the FROM clause. selectable a table or select() statement that will be used in place of the generated FROM clause. This argument is required if any of the desired mappers use concrete table inheritance, since SQLAlchemy currently cannot generate UNIONs among tables automatically. If used, the selectable argument must represent the full set of tables and columns mapped by every desired mapper. Otherwise, the unaccounted mapped columns will result in their table being appended directly to the FROM clause which will usually lead to incorrect results. discriminator a column to be used as the discriminator column for the given selectable. If not given, the polymorphic_on attribute of the mapper will be used, if any.
            This is useful for mappers that don't have polymorphic loading behavior by default, such as concrete table mappers.

with_session(session)
    Return a Query that will use the given Session.

yield_per(count)
    Yield only count rows at a time.
    WARNING: use this method with caution; if the same instance is present in more than one batch of rows, end-user changes to attributes will be overwritten.
    In particular, it's usually impossible to use this setting with eagerly loaded collections (i.e. any lazy='joined' or 'subquery') since those collections will be cleared for a new load when encountered in a subsequent result batch. In the case of 'subquery' loading, the full result for all rows is fetched which generally defeats the purpose of yield_per().
    Also note that many DBAPIs do not "stream" results, pre-buffering all rows before making them available, including mysql-python and psycopg2. yield_per() will also set the stream_results execution option to True, which currently is only understood by psycopg2 and causes server side cursors to be used.
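As a minimal sketch of yield_per(), assuming a mapped User class and a DBAPI that streams results:

    # fetch rows in batches of 1000 rather than buffering the full result;
    # avoid combining this with eagerly loaded collections
    for user in session.query(User).yield_per(1000):
        print(user.name)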
to one on the entity. The use case for this is when associating an entity with some derived selectable such as one that uses aggregate functions:

    class UnitPrice(Base):
        __tablename__ = 'unit_price'
        ...
        unit_id = Column(Integer)
        price = Column(Numeric)

    aggregated_unit_price = Session.query(
                                func.sum(UnitPrice.price).label('price')
                            ).group_by(UnitPrice.unit_id).subquery()
    aggregated_unit_price = aliased(UnitPrice,
                                alias=aggregated_unit_price,
                                adapt_on_names=True)

Above, functions on aggregated_unit_price which refer to .price will return the func.sum(UnitPrice.price).label('price') column, as it is matched on the name "price". Ordinarily, the "price" function wouldn't have any "column correspondence" to the actual UnitPrice.price column as it is not a proxy of the original.

adapt_on_names is new in 0.7.3.

sqlalchemy.orm.join(left, right, onclause=None, isouter=False, join_to_left=True)
    Produce an inner join between left and right clauses.
    orm.join() is an extension to the core join interface provided by sql.expression.join(), where the left and right selectables may be not only core selectable objects such as Table, but also mapped classes or AliasedClass instances. The "on" clause can be a SQL expression, or an attribute or string name referencing a configured relationship().
    join_to_left indicates to attempt aliasing the ON clause, in whatever form it is passed, to the selectable passed as the left side. If False, the onclause is used as is.
    orm.join() is not commonly needed in modern usage, as its functionality is encapsulated within that of the Query.join() method, which features a significant amount of automation beyond orm.join() by itself. Explicit usage of orm.join() with Query involves usage of the Query.select_from() method, as in:

        from sqlalchemy.orm import join
        session.query(User).\
            select_from(join(User, Address, User.addresses)).\
            filter(Address.email_address == 'foo@bar.com')

    In modern SQLAlchemy the above join can be written more succinctly as:

        session.query(User).\
            join(User.addresses).\
            filter(Address.email_address == 'foo@bar.com')

    See Query.join() for information on modern usage of ORM level joins.

sqlalchemy.orm.outerjoin(left, right, onclause=None, join_to_left=True)
    Produce a left outer join between left and right clauses.
    This is the "outer join" version of the orm.join() function, featuring the same behavior except that an OUTER JOIN is generated. See that function's documentation for other usage details.
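By way of illustration, a minimal sketch of the select_from() pattern with the outer-join variant, assuming the User/Address mapping used elsewhere in this chapter:

    from sqlalchemy.orm import outerjoin

    # all users, with or without addresses, via an explicit LEFT OUTER JOIN
    session.query(User).\
        select_from(outerjoin(User, Address, User.addresses)).\
        all()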
sqlalchemy.orm.with_parent(instance, prop)
    Create filtering criterion that relates this query's primary entity to the given related instance, using established relationship() configuration.
    The SQL rendered is the same as that rendered when a lazy loader would fire off from the given parent on that attribute, meaning that the appropriate state is taken from the parent object in Python without the need to render joins to the parent table in the rendered statement.
    As of 0.6.4, this method accepts parent instances in all persistence states, including transient, persistent, and detached. Only the requisite primary key/foreign key attributes need to be populated. Previous versions didn't work with transient instances.

    Parameters:
        instance - An instance which has some relationship().
        property - String property name, or class-bound attribute, which indicates what relationship from the instance should be used to reconcile the parent/child relationship.
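For illustration, a minimal sketch assuming the User/Address mapping used throughout this chapter, where some_user is an already-loaded User instance:

    from sqlalchemy.orm import with_parent

    # select the Address rows belonging to some_user, without
    # rendering a join to the users table
    session.query(Address).\
        filter(with_parent(some_user, User.addresses)).\
        all()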
    >>> jack = session.query(User).\
    ...     options(joinedload('addresses')).\
    ...     filter_by(name='jack').all()
    SELECT addresses_1.id AS addresses_1_id,
        addresses_1.email_address AS addresses_1_email_address,
        addresses_1.user_id AS addresses_1_user_id,
        users.id AS users_id, users.name AS users_name,
        users.fullname AS users_fullname,
        users.password AS users_password
    FROM users LEFT OUTER JOIN addresses AS addresses_1
        ON users.id = addresses_1.user_id
    WHERE users.name = ?
    ['jack']

In addition to "joined" eager loading, a second option for eager loading exists, called "subquery" eager loading. This kind of eager loading emits an additional SQL statement for each collection requested, aggregated across all parent objects:

    >>> jack = session.query(User).\
    ...     options(subqueryload('addresses')).\
    ...     filter_by(name='jack').all()
    SELECT users.id AS users_id, users.name AS users_name,
        users.fullname AS users_fullname,
        users.password AS users_password
    FROM users
    WHERE users.name = ?
    ('jack',)
    SELECT addresses.id AS addresses_id,
        addresses.email_address AS addresses_email_address,
        addresses.user_id AS addresses_user_id,
        anon_1.users_id AS anon_1_users_id
    FROM (SELECT users.id AS users_id FROM users WHERE users.name = ?) AS anon_1
    JOIN addresses ON anon_1.users_id = addresses.user_id
    ORDER BY anon_1.users_id, addresses.id
    ('jack',)

The default loader strategy for any relationship() is configured by the lazy keyword argument, which defaults to 'select' - this indicates a "select" statement. Below we set it as 'joined' so that the children relationship is eager loaded, using a join:

    # load the 'children' collection using LEFT OUTER JOIN
    mapper(Parent, parent_table, properties={
        'children': relationship(Child, lazy='joined')
    })

We can also set it to eagerly load using a second query for all collections, using 'subquery':

    # load the 'children' attribute using a join to a subquery
    mapper(Parent, parent_table, properties={
        'children': relationship(Child, lazy='subquery')
    })

When querying, all three choices of loader strategy are available on a per-query basis, using the joinedload(), subqueryload() and lazyload() query options:

    # set children to load lazily
    session.query(Parent).options(lazyload('children')).all()

    # set children to load eagerly with a join
    session.query(Parent).options(joinedload('children')).all()

    # set children to load eagerly with a second statement
    session.query(Parent).options(subqueryload('children')).all()

To reference a relationship that is deeper than one level, separate the names by periods:

    session.query(Parent).options(joinedload('foo.bar.bat')).all()

When using dot-separated names with joinedload() or subqueryload(), the option applies only to the actual attribute named, and not its ancestors. For example, suppose a mapping from A to B to C, where the relationships, named atob and btoc, are both lazy-loading. A statement like the following:
    session.query(A).options(joinedload('atob.btoc')).all()

will load only A objects to start. When the atob attribute on each A is accessed, the returned B objects will eagerly load their C objects. Therefore, to modify the eager load to load both atob as well as btoc, place joinedloads for both:

    session.query(A).options(joinedload('atob'), joinedload('atob.btoc')).all()

or more simply just use joinedload_all() or subqueryload_all():

    session.query(A).options(joinedload_all('atob.btoc')).all()

There are two other loader strategies available, dynamic loading and no loading; these are described in Working with Large Collections.
    >>> jack = session.query(User).\
    ...     options(joinedload(User.addresses)).\
    ...     filter(User.name=='jack').\
    ...     order_by(Address.email_address).all()
    SELECT addresses_1.id AS addresses_1_id,
        addresses_1.email_address AS addresses_1_email_address,
        addresses_1.user_id AS addresses_1_user_id,
        users.id AS users_id, users.name AS users_name,
        users.fullname AS users_fullname,
        users.password AS users_password
    FROM users LEFT OUTER JOIN addresses AS addresses_1
        ON users.id = addresses_1.user_id
    WHERE users.name = ?
    ORDER BY addresses.email_address   <-- this part is wrong !
    ['jack']

Above, ORDER BY addresses.email_address is not valid since addresses is not in the FROM list. The correct way to load the User records and order by email address is to use Query.join():

    >>> jack = session.query(User).\
    ...     join(User.addresses).\
    ...     filter(User.name=='jack').\
    ...     order_by(Address.email_address).all()
    SELECT users.id AS users_id, users.name AS users_name,
        users.fullname AS users_fullname,
        users.password AS users_password
    FROM users JOIN addresses ON users.id = addresses.user_id
    WHERE users.name = ?
    ORDER BY addresses.email_address
    ['jack']
The statement above is of course not the same as the previous one, in that the columns from addresses are not included in the result at all. We can add joinedload() back in, so that there are two joins - one is that which we are ordering on, the other is used anonymously to load the contents of the User.addresses collection:
    >>> jack = session.query(User).\
    ...     join(User.addresses).\
    ...     options(joinedload(User.addresses)).\
    ...     filter(User.name=='jack').\
    ...     order_by(Address.email_address).all()
    SELECT addresses_1.id AS addresses_1_id,
        addresses_1.email_address AS addresses_1_email_address,
        addresses_1.user_id AS addresses_1_user_id,
        users.id AS users_id, users.name AS users_name,
        users.fullname AS users_fullname,
        users.password AS users_password
    FROM users JOIN addresses ON users.id = addresses.user_id
    LEFT OUTER JOIN addresses AS addresses_1
        ON users.id = addresses_1.user_id
    WHERE users.name = ?
    ORDER BY addresses.email_address
    ['jack']

What we see above is that our usage of Query.join() is to supply JOIN clauses we'd like to use in subsequent query criterion, whereas our usage of joinedload() only concerns itself with the loading of the User.addresses collection, for each User in the result. In this case, the two joins most probably appear redundant - which they are. If we wanted to use just one JOIN for collection loading as well as ordering, we use the contains_eager() option, described in Routing Explicit Joins/Statements into Eagerly Loaded Collections below. But to see why joinedload() does what it does, consider if we were filtering on a particular Address:
    >>> jack = session.query(User).\
    ...     join(User.addresses).\
    ...     options(joinedload(User.addresses)).\
    ...     filter(User.name=='jack').\
    ...     filter(Address.email_address=='someaddress@foo.com').\
    ...     all()
    SELECT addresses_1.id AS addresses_1_id,
        addresses_1.email_address AS addresses_1_email_address,
        addresses_1.user_id AS addresses_1_user_id,
        users.id AS users_id, users.name AS users_name,
        users.fullname AS users_fullname,
        users.password AS users_password
    FROM users JOIN addresses ON users.id = addresses.user_id
    LEFT OUTER JOIN addresses AS addresses_1
        ON users.id = addresses_1.user_id
    WHERE users.name = ? AND addresses.email_address = ?
    ['jack', 'someaddress@foo.com']

Above, we can see that the two JOINs have very different roles. One will match exactly one row, that of the join of User and Address where Address.email_address matches the given criterion. The other LEFT OUTER JOIN will match all Address rows related to User, and is only used to populate the User.addresses collection, for those User objects that are returned.

By changing the usage of joinedload() to another style of loading, we can change how the collection is loaded completely independently of SQL used to retrieve the actual User rows we want. Below we change joinedload() into subqueryload():

    >>> jack = session.query(User).\
    ...     join(User.addresses).\
    ...     options(subqueryload(User.addresses)).\
    ...     filter(User.name=='jack').\
    ...     filter(Address.email_address=='someaddress@foo.com').\
    ...     all()
    SELECT users.id AS users_id, users.name AS users_name,
        users.fullname AS users_fullname,
        users.password AS users_password
    FROM users JOIN addresses ON users.id = addresses.user_id
    WHERE users.name = ? AND addresses.email_address = ?
    ['jack', 'someaddress@foo.com']
    # ... subqueryload() emits a SELECT in order
    # to load all address records ...

When using joined eager loading, if the query contains a modifier that impacts the rows returned externally to the joins, such as when using DISTINCT, LIMIT, OFFSET or equivalent, the completed statement is first wrapped inside a subquery, and the joins used specifically for joined eager loading are applied to the subquery. SQLAlchemy's joined eager loading goes the extra mile, and then ten miles further, to absolutely ensure that it does not affect the end result of the query, only the way collections and related objects are loaded, no matter what the format of the query is.
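As an illustrative sketch of that wrapping behavior, assuming the same User/Address mapping:

    # the LIMIT applies to an inner SELECT of users only; the LEFT OUTER
    # JOIN used for eager loading is rendered against the wrapping
    # subquery, so each returned User still receives its full
    # addresses collection
    session.query(User).\
        options(joinedload(User.addresses)).\
        limit(10).\
        all()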
Subquery loading will issue a second load for all the child objects, so for a load of 100 objects there would be two SQL statements emitted. There's probably not much advantage here over joined loading, however, except perhaps that subquery loading can use an INNER JOIN in all cases whereas joined loading requires that the foreign key is NOT NULL.
    session.query(User).\
        outerjoin(User.addresses).\
        options(contains_eager(User.addresses)).\
        all()

If the "eager" portion of the statement is "aliased", the alias keyword argument to contains_eager() may be used to indicate it. This is a string alias name or reference to an actual Alias (or other selectable) object:

    # use an alias of the Address entity
    adalias = aliased(Address)

    # construct a Query object which expects the "addresses" results
    query = session.query(User).\
        outerjoin(adalias, User.addresses).\
        options(contains_eager(User.addresses, alias=adalias))
    # get results normally
    r = query.all()
    SELECT users.user_id AS users_user_id, users.user_name AS users_user_name,
        adalias.address_id AS adalias_address_id,
        adalias.user_id AS adalias_user_id,
        adalias.email_address AS adalias_email_address, (...other columns...)
    FROM users LEFT OUTER JOIN email_addresses AS email_addresses_1
        ON users.user_id = email_addresses_1.user_id

The alias argument is used only as a source of columns to match up to the result set. You can use it even to match up the result to arbitrary label names in a string SQL statement, by passing a select() which links those labels to the mapped Table:

    # label the columns of the addresses table
    eager_columns = select([
        addresses.c.address_id.label('a1'),
        addresses.c.email_address.label('a2'),
        addresses.c.user_id.label('a3')])

    # select from a raw SQL statement which uses those label names for the
    # addresses table.  contains_eager() matches them up.
    query = session.query(User).\
        from_statement("select users.*, addresses.address_id as a1, "
            "addresses.email_address as a2, addresses.user_id as a3 "
            "from users left outer join addresses on users.user_id=addresses.user_id").\
        options(contains_eager(User.addresses, alias=eager_columns))

The path given as the argument to contains_eager() needs to be a full path from the starting entity. For example if we were loading Users->orders->Order->items->Item, the string version would look like:

    query(User).options(contains_eager('orders', 'items'))

Or using the class-bound descriptor:

    query(User).options(contains_eager(User.orders, Order.items))

A variant on contains_eager() is the contains_alias() option, which is used in the rare case that the parent object is loaded from an alias within a user-defined SELECT statement:
    # define an aliased UNION called 'ulist'
    statement = users.select(users.c.user_id==7).\
        union(users.select(users.c.user_id>7)).\
        alias('ulist')

    # add on an eager load of "addresses"
    statement = statement.outerjoin(addresses).select().apply_labels()

    # create query, indicating "ulist" is an alias for the main table,
    # "addresses" property should be eager loaded
    query = session.query(User).\
        options(contains_alias('ulist'), contains_eager('addresses'))

    # results
    r = query.from_statement(statement)
contains_eager() also accepts an alias argument, which is the string name of an alias, an alias() construct, or an aliased() construct. Use this when the eagerly-loaded rows are to come from an aliased table:

    user_alias = aliased(User)
    sess.query(Order).\
        join((user_alias, Order.user)).\
        options(contains_eager(Order.user, alias=user_alias))

See also eagerload() for the "automatic" version of this functionality.

For additional examples of contains_eager() see Routing Explicit Joins/Statements into Eagerly Loaded Collections.

sqlalchemy.orm.eagerload(*args, **kwargs)
    A synonym for joinedload().

sqlalchemy.orm.eagerload_all(*args, **kwargs)
    A synonym for joinedload_all().

sqlalchemy.orm.joinedload(*keys, **kw)
    Return a MapperOption that will convert the property of the given name or series of mapped attributes into a joined eager load.
    Note: This function is known as eagerload() in all versions of SQLAlchemy prior to version 0.6beta3, including the 0.5 and 0.4 series. eagerload() will remain available for the foreseeable future in order to enable cross-compatibility.
    Used with options(). Examples:

        # joined-load the "orders" collection on "User"
        query(User).options(joinedload(User.orders))

        # joined-load the "keywords" collection on each "Item",
        # but not the "items" collection on "Order" - those
        # remain lazily loaded.
        query(Order).options(joinedload(Order.items, Item.keywords))

        # to joined-load across both, use joinedload_all()
        query(Order).options(joinedload_all(Order.items, Item.keywords))

    joinedload() also accepts a keyword argument innerjoin=True which indicates using an inner join instead of an outer:

        query(Order).options(joinedload(Order.user, innerjoin=True))

    Note: The join created by joinedload() is anonymously aliased such that it does not affect the query results. A Query.order_by() or Query.filter() call cannot reference these aliased tables - so-called "user space" joins are constructed using Query.join(). The rationale for this is that joinedload() is only applied in order to affect how related objects or collections are loaded as an optimizing detail - it can be added or removed with no impact on actual results. See the section The Zen of Eager Loading for a detailed description of how this is used, including how to use a single explicit JOIN for filtering/ordering and eager loading simultaneously.

    See also: subqueryload(), lazyload()
sqlalchemy.orm.joinedload_all(*keys, **kw)
    Return a MapperOption that will convert all properties along the given dot-separated path or series of mapped attributes into a joined eager load.
    Note: This function is known as eagerload_all() in all versions of SQLAlchemy prior to version 0.6beta3, including the 0.5 and 0.4 series. eagerload_all() will remain available for the foreseeable future in order to enable cross-compatibility.
    Used with options(). For example:

        query.options(joinedload_all('orders.items.keywords'))...

    will set all of orders, orders.items, and orders.items.keywords to load in one joined eager load.
    Individual descriptors are accepted as arguments as well:

        query.options(joinedload_all(User.orders, Order.items, Item.keywords))

    The keyword arguments accept a flag innerjoin=True|False which will override the value of the innerjoin flag specified on the relationship().
    See also: subqueryload_all(), lazyload()

sqlalchemy.orm.lazyload(*keys)
    Return a MapperOption that will convert the property of the given name or series of mapped attributes into a lazy load.
    Used with options().
    See also: eagerload(), subqueryload(), immediateload()

sqlalchemy.orm.subqueryload(*keys)
    Return a MapperOption that will convert the property of the given name or series of mapped attributes into a subquery eager load.
    Used with options(). Examples:

        # subquery-load the "orders" collection on "User"
        query(User).options(subqueryload(User.orders))

        # subquery-load the "keywords" collection on each "Item",
        # but not the "items" collection on "Order" - those
        # remain lazily loaded.
        query(Order).options(subqueryload(Order.items, Item.keywords))

        # to subquery-load across both, use subqueryload_all()
        query(Order).options(subqueryload_all(Order.items, Item.keywords))

    See also: joinedload(), lazyload()

sqlalchemy.orm.subqueryload_all(*keys)
    Return a MapperOption that will convert all properties along the given dot-separated path or series of mapped attributes into a subquery eager load.
    Used with options(). For example:
        query.options(subqueryload_all('orders.items.keywords'))...

    will set all of orders, orders.items, and orders.items.keywords to load in one subquery eager load.
    Individual descriptors are accepted as arguments as well:

        query.options(subqueryload_all(User.orders, Order.items, Item.keywords))

    See also: joinedload_all(), lazyload(), immediateload()
Note that active_history can also be set directly via column_property() and relationship().

propagate=False
    When True, the listener function will be established not just for the class attribute given, but for attributes of the same name on all current subclasses of that class, as well as all future subclasses of that class, using an additional listener that listens for instrumentation events.

raw=False
    When True, the "target" argument to the event will be the InstanceState management object, rather than the mapped instance itself.

retval=False
    When True, the user-defined event listening must return the "value" argument from the function. This gives the listening function the opportunity to change the value that is ultimately used for a "set" or "append" event.

append(target, value, initiator)
    Receive a collection append event.

    Parameters:
        target - the object instance receiving the event. If the listener is registered with raw=True, this will be the InstanceState object.
        value - the value being appended. If this listener is registered with retval=True, the listener function must return this value, or a new value which replaces it.
        initiator - the attribute implementation object which initiated this event.

    Returns: if the event was registered with retval=True, the given value, or a new effective value, should be returned.

remove(target, value, initiator)
    Receive a collection remove event.

    Parameters:
        target - the object instance receiving the event. If the listener is registered with raw=True, this will be the InstanceState object.
        value - the value being removed.
        initiator - the attribute implementation object which initiated this event.

    Returns: No return value is defined for this event.

set(target, value, oldvalue, initiator)
    Receive a scalar set event.

    Parameters:
        target - the object instance receiving the event. If the listener is registered with raw=True, this will be the InstanceState object.
        value - the value being set. If this listener is registered with retval=True, the listener function must return this value, or a new value which replaces it.
        oldvalue - the previous value being replaced. This may also be the symbol NEVER_SET or NO_VALUE. If the listener is registered with active_history=True, the previous value of the attribute will be loaded from the database if the existing value is currently unloaded or expired.
        initiator - the attribute implementation object which initiated this event.

    Returns: if the event was registered with retval=True, the given value, or a new effective value, should be returned.
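By way of example, a minimal sketch of a "set" listener, assuming a hypothetical mapped class MyClass with a string attribute data; with retval=True the listener's return value replaces the incoming value:

    from sqlalchemy import event

    @event.listens_for(MyClass.data, 'set', retval=True)
    def strip_on_set(target, value, oldvalue, initiator):
        # normalize incoming values before they are applied
        return value.strip() if isinstance(value, str) else value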
sqlalchemy.orm.interfaces.EXT_STOP - cancel all subsequent event handlers in the chain.

other values - the return value specified by specific listeners, such as translate_row() or create_instance().

after_configured()
    Called after a series of mappers have been configured.
    This corresponds to the orm.configure_mappers() call, which is usually called automatically as mappings are first used.
    Theoretically this event is called once per application, but is actually called any time new mappers have been affected by an orm.configure_mappers() call. If new mappings are constructed after existing ones have already been used, this event can be called again.

after_delete(mapper, connection, target)
    Receive an object instance after a DELETE statement has been emitted corresponding to that instance.
    This event is used to emit additional SQL statements on the given connection as well as to perform application specific bookkeeping related to a deletion event.
    The event is often called for a batch of objects of the same class after their DELETE statements have been emitted at once in a previous step.
    Handlers should not alter mapped attributes on the objects just flushed or on other objects of the same class, nor should any other ORM-based operation such as Session.add take place here. Attribute changes on objects that were already flushed will be discarded, and changes to the flush plan will also not take place. Use SessionEvents.before_flush() to change the flush plan on flush.

    Parameters:
        mapper - the Mapper which is the target of this event.
        connection - the Connection being used to emit DELETE statements for this instance. This provides a handle into the current transaction on the target database specific to this instance.
        target - the mapped instance being deleted. If the event is configured with raw=True, this will instead be the InstanceState state-management object associated with the instance.

    Returns: No return value is supported by this event.

after_insert(mapper, connection, target)
    Receive an object instance after an INSERT statement is emitted corresponding to that instance.
    This event is used to modify in-Python-only state on the instance after an INSERT occurs, as well as to emit additional SQL statements on the given connection.
    The event is often called for a batch of objects of the same class after their INSERT statements have been emitted at once in a previous step. In the extremely rare case that this is not desirable, the mapper() can be configured with batch=False, which will cause batches of instances to be broken up into individual (and more poorly performing) event->persist->event steps.
    Handlers should not alter mapped attributes on the objects just flushed or on other objects of the same class, nor should any other ORM-based operation such as Session.add take place here. Attribute changes on objects that were already flushed will be discarded, and changes to the flush plan will also not take place. Use SessionEvents.before_flush() to change the flush plan on flush.

    Parameters:
        mapper - the Mapper which is the target of this event.
        connection - the Connection being used to emit INSERT statements for this instance. This provides a handle into the current transaction on the target database specific to this instance.
        target - the mapped instance being persisted. If the event is configured with raw=True, this will instead be the InstanceState state-management object associated with the instance.

    Returns: No return value is supported by this event.

after_update(mapper, connection, target)
    Receive an object instance after an UPDATE statement is emitted corresponding to that instance.
    This event is used to modify in-Python-only state on the instance after an UPDATE occurs, as well as to emit additional SQL statements on the given connection.
    This method is called for all instances that are marked as "dirty", even those which have no net changes to their column-based attributes, and for which no UPDATE statement has proceeded. An object is marked as dirty when any of its column-based attributes have a "set attribute" operation called or when any of its collections are modified. If, at update time, no column-based attributes have any net changes, no UPDATE statement will be issued. This means that an instance being sent to after_update() is not a guarantee that an UPDATE statement has been issued.
    To detect if the column-based attributes on the object have net changes, and therefore resulted in an UPDATE statement, use object_session(instance).is_modified(instance, include_collections=False).
    The event is often called for a batch of objects of the same class after their UPDATE statements have been emitted at once in a previous step. In the extremely rare case that this is not desirable, the mapper() can be configured with batch=False, which will cause batches of instances to be broken up into individual (and more poorly performing) event->persist->event steps.
    Handlers should not alter mapped attributes on the objects just flushed or on other objects of the same class, nor should any other ORM-based operation such as Session.add take place here. Attribute changes on objects that were already flushed will be discarded, and changes to the flush plan will also not take place. Use SessionEvents.before_flush() to change the flush plan on flush.

    Parameters:
        mapper - the Mapper which is the target of this event.
        connection - the Connection being used to emit UPDATE statements for this instance. This provides a handle into the current transaction on the target database specific to this instance.
        target - the mapped instance being persisted. If the event is configured with raw=True, this will instead be the InstanceState state-management object associated with the instance.

    Returns: No return value is supported by this event.

append_result(mapper, context, row, target, result, **flags)
    Receive an object instance before that instance is appended to a result list.
    This is a rarely used hook which can be used to alter the construction of a result list returned by Query.

    Parameters:
        mapper - the Mapper which is the target of this event.
        context - the QueryContext, which includes a handle to the current Query in progress as well as additional state information.
        row - the result row being handled. This may be an actual RowProxy or may be a dictionary containing Column objects as keys.
        target - the mapped instance being populated. If the event is configured with raw=True, this will instead be the InstanceState state-management object associated with the instance.
        result - a list-like object where results are being appended.
        **flags - Additional state information about the current handling of the row.

    Returns: If this method is registered with retval=True, a return value of EXT_STOP will prevent the instance from being appended to the given result list, whereas a return value of EXT_CONTINUE will result in the default behavior of appending the value to the result list.

before_delete(mapper, connection, target)
    Receive an object instance before a DELETE statement is emitted corresponding to that instance.
    This event is used to emit additional SQL statements on the given connection as well as to perform application specific bookkeeping related to a deletion event.
    The event is often called for a batch of objects of the same class before their DELETE statements are emitted at once in a later step.
    Handlers should not modify any attributes which are mapped by relationship(), nor should they attempt to make any modifications to the Session in this hook (including Session.add(), Session.delete(), etc.) - such changes will not take effect. For overall changes to the "flush plan", use SessionEvents.before_flush().

    Parameters:
        mapper - the Mapper which is the target of this event.
        connection - the Connection being used to emit DELETE statements for this instance. This provides a handle into the current transaction on the target database specific to this instance.
        target - the mapped instance being deleted. If the event is configured with raw=True, this will instead be the InstanceState state-management object associated with the instance.

    Returns: No return value is supported by this event.

before_insert(mapper, connection, target)
    Receive an object instance before an INSERT statement is emitted corresponding to that instance.
    This event is used to modify local, non-object related attributes on the instance before an INSERT occurs, as well as to emit additional SQL statements on the given connection.
    The event is often called for a batch of objects of the same class before their INSERT statements are emitted at once in a later step. In the extremely rare case that this is not desirable, the mapper() can be configured with batch=False, which will cause batches of instances to be broken up into individual (and more poorly performing) event->persist->event steps.
    Handlers should not modify any attributes which are mapped by relationship(), nor should they attempt to make any modifications to the Session in this hook (including Session.add(), Session.delete(), etc.) - such changes will not take effect. For overall changes to the "flush plan", use SessionEvents.before_flush().

    Parameters:
        mapper - the Mapper which is the target of this event.
        connection - the Connection being used to emit INSERT statements for this instance. This provides a handle into the current transaction on the target database specific to this instance.
        target - the mapped instance being persisted. If the event is configured with raw=True, this will instead be the InstanceState state-management object associated with the instance.

    Returns: No return value is supported by this event.

before_update(mapper, connection, target)
    Receive an object instance before an UPDATE statement is emitted corresponding to that instance.
    This event is used to modify local, non-object related attributes on the instance before an UPDATE occurs, as well as to emit additional SQL statements on the given connection.
    This method is called for all instances that are marked as "dirty", even those which have no net changes to their column-based attributes. An object is marked as dirty when any of its column-based attributes have a "set attribute" operation called or when any of its collections are modified. If, at update time, no column-based attributes have any net changes, no UPDATE statement will be issued. This means that an instance being sent to before_update() is not a guarantee that an UPDATE statement will be issued, although you can affect the outcome here by modifying attributes so that a net change in value does exist.
    To detect if the column-based attributes on the object have net changes, and will therefore generate an UPDATE statement, use object_session(instance).is_modified(instance, include_collections=False).
    The event is often called for a batch of objects of the same class before their UPDATE statements are emitted at once in a later step. In the extremely rare case that this is not desirable, the mapper() can be configured with batch=False, which will cause batches of instances to be broken up into individual (and more poorly performing) event->persist->event steps.
    Handlers should not modify any attributes which are mapped by relationship(), nor should they attempt to make any modifications to the Session in this hook (including Session.add(), Session.delete(), etc.) - such changes will not take effect. For overall changes to the "flush plan", use SessionEvents.before_flush().

    Parameters:
        mapper - the Mapper which is the target of this event.
        connection - the Connection being used to emit UPDATE statements for this instance. This provides a handle into the current transaction on the target database specific to this instance.
        target - the mapped instance being persisted. If the event is configured with raw=True, this will instead be the InstanceState state-management object associated with the instance.

    Returns: No return value is supported by this event.

create_instance(mapper, context, row, class_)
    Receive a row when a new object instance is about to be created from that row.
    The method can choose to create the instance itself, or it can return EXT_CONTINUE to indicate normal object creation should take place. This listener is typically registered with retval=True.

    Parameters:
        mapper - the Mapper which is the target of this event.
        context - the QueryContext, which includes a handle to the current Query in progress as well as additional state information.
        row - the result row being handled. This may be an actual RowProxy or may be a dictionary containing Column objects as keys.
        class_ - the mapped class.

    Returns: When configured with retval=True, the return value should be a newly created instance of the mapped class, or EXT_CONTINUE indicating that default object construction should take place.

instrument_class(mapper, class_)
    Receive a class when the mapper is first constructed, before instrumentation is applied to the mapped class.
    This event is the earliest phase of mapper construction. Most attributes of the mapper are not yet initialized.
    This listener can generally only be applied to the Mapper class overall.

    Parameters:
        mapper - the Mapper which is the target of this event.
        class_ - the mapped class.

mapper_configured(mapper, class_)
    Called when the mapper for the class is fully configured.
    This event is the latest phase of mapper construction. The mapper should be in its final state.

    Parameters:
        mapper - the Mapper which is the target of this event.
        class_ - the mapped class.

populate_instance(mapper, context, row, target, **flags)
    Receive an instance before that instance has its attributes populated.
    This usually corresponds to a newly loaded instance but may also correspond to an already-loaded instance which has unloaded attributes to be populated. The method may be called many times for a single instance, as multiple result rows are used to populate eagerly loaded collections.
    Most usages of this hook are obsolete. For a generic "object has been newly created from a row" hook, use InstanceEvents.load().

    Parameters:
        mapper - the Mapper which is the target of this event.
        context - the QueryContext, which includes a handle to the current Query in progress as well as additional state information.
        row - the result row being handled. This may be an actual RowProxy or may be a dictionary containing Column objects as keys.
        target - the mapped instance. If the event is configured with raw=True, this will instead be the InstanceState state-management object associated with the instance.

    Returns: When configured with retval=True, a return value of EXT_STOP will bypass instance population by the mapper. A value of EXT_CONTINUE indicates that default instance population should take place.

translate_row(mapper, context, row)
    Perform pre-processing on the given result row and return a new row instance.
    This listener is typically registered with retval=True. It is called when the mapper first receives a row, before the object identity or the instance itself has been derived from that row. The given row may or may not be a RowProxy object - it will always be a dictionary-like object which contains mapped columns
    as keys. The returned object should also be a dictionary-like object which recognizes mapped columns as keys.

    Parameters:
        mapper - the Mapper which is the target of this event.
        context - the QueryContext, which includes a handle to the current Query in progress as well as additional state information.
        row - the result row being handled. This may be an actual RowProxy or may be a dictionary containing Column objects as keys.

    Returns: When configured with retval=True, the function should return a dictionary-like row object, or EXT_CONTINUE, indicating the original row should be used.
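To illustrate the persistence events above, a minimal sketch assuming a hypothetical mapped class MyClass with a plain string data column; only local, non-relationship attributes are adjusted:

    from sqlalchemy import event

    @event.listens_for(MyClass, 'before_insert')
    def lowercase_data(mapper, connection, target):
        # adjust a simple column-based attribute before the INSERT;
        # flush-plan changes belong in SessionEvents.before_flush()
        if target.data is not None:
            target.data = target.data.lower()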
keys is a list of attribute names. If None, the entire state was expired.

    Parameters:
        target - the mapped instance. If the event is configured with raw=True, this will instead be the InstanceState state-management object associated with the instance.
        attrs - iterable collection of attribute names which were expired, or None if all attributes were expired.

first_init(manager, cls)
    Called when the first instance of a particular mapping is called.

init(target, args, kwargs)
    Receive an instance when its constructor is called.
    This method is only called during a userland construction of an object. It is not called when an object is loaded from the database.

init_failure(target, args, kwargs)
    Receive an instance when its constructor has been called, and raised an exception.
    This method is only called during a userland construction of an object. It is not called when an object is loaded from the database.

load(target, context)
    Receive an object instance after it has been created via __new__, and after initial attribute population has occurred.
    This typically occurs when the instance is created based on incoming result rows, and is only called once for that instance's lifetime.
    Note that during a result-row load, this method is called upon the first row received for this instance. Note that some attributes and collections may or may not be loaded or even initialized, depending on what's present in the result rows.

    Parameters:
        target - the mapped instance. If the event is configured with raw=True, this will instead be the InstanceState state-management object associated with the instance.
        context - the QueryContext corresponding to the current Query in progress. This argument may be None if the load does not correspond to a Query, such as during Session.merge().

pickle(target, state_dict)
    Receive an object instance when its associated state is being pickled.

    Parameters:
        target - the mapped instance. If the event is configured with raw=True, this will instead be the InstanceState state-management object associated with the instance.
        state_dict - the dictionary returned by InstanceState.__getstate__, containing the state to be pickled.

refresh(target, context, attrs)
    Receive an object instance after one or more attributes have been refreshed from a query.

    Parameters:
        target - the mapped instance. If the event is configured with raw=True, this will instead be the InstanceState state-management object associated with the instance.
        context - the QueryContext corresponding to the current Query in progress.
        attrs - iterable collection of attribute names which were populated, or None if all column-mapped, non-deferred attributes were populated.

resurrect(target)
    Receive an object instance as it is "resurrected" from garbage collection, which occurs when a "dirty" state falls out of scope.

    Parameters:
        target - the mapped instance. If the event is configured with raw=True, this will instead be the InstanceState state-management object associated with the instance.

unpickle(target, state_dict)
    Receive an object instance after its associated state has been unpickled.

    Parameters:
        target - the mapped instance. If the event is configured with raw=True, this will instead be the InstanceState state-management object associated with the instance.
        state_dict - the dictionary sent to InstanceState.__setstate__, containing the state dictionary which was pickled.
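As a brief sketch of the instance-level events above, assuming a hypothetical mapped class MyClass that keeps a non-mapped cache attribute:

    from sqlalchemy import event

    @event.listens_for(MyClass, 'load')
    def establish_cache(target, context):
        # set up transient, non-mapped state each time an instance
        # is created from a result row
        target._cache = {}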
after_bulk_delete(session, query, query_context, result)
    Execute after a bulk delete operation to the session.
    This is called after a session.query(...).delete().
    "query" is the query object that this delete operation was called on. "query_context" was the query context object. "result" is the result object returned from the bulk operation.

after_bulk_update(session, query, query_context, result)
    Execute after a bulk update operation to the session.
    This is called after a session.query(...).update().
    "query" is the query object that this update operation was called on. "query_context" was the query context object. "result" is the result object returned from the bulk operation.

after_commit(session)
    Execute after a commit has occurred.
    Note that this may not be per-flush if a longer running transaction is ongoing.

    Parameters:
        session - The target Session.

after_flush(session, flush_context)
    Execute after flush has completed, but before commit has been called.
    Note that the session's state is still in pre-flush, i.e. 'new', 'dirty', and 'deleted' lists still show pre-flush state as well as the history settings on instance attributes.

    Parameters:
        session - The target Session.
        flush_context - Internal UOWTransaction object which handles the details of the flush.

after_flush_postexec(session, flush_context)
    Execute after flush has completed, and after the post-exec state occurs.
    This will be when the 'new', 'dirty', and 'deleted' lists are in their final state. An actual commit() may or may not have occurred, depending on whether or not the flush started its own transaction or participated in a larger transaction.

    Parameters:
        session - The target Session.
        flush_context - Internal UOWTransaction object which handles the details of the flush.

after_rollback(session)
    Execute after a real DBAPI rollback has occurred.
    Note that this event only fires when the actual rollback against the database occurs - it does not fire each time the Session.rollback() method is called, if the underlying DBAPI transaction has already been rolled back. In many cases, the Session will not be in an "active" state during this event, as the current transaction is not valid. To acquire a Session which is active after the outermost rollback has proceeded, use the SessionEvents.after_soft_rollback() event, checking the Session.is_active flag.

    Parameters:
        session - The target Session.

after_soft_rollback(session, previous_transaction)
    Execute after any rollback has occurred, including "soft" rollbacks that don't actually emit at the DBAPI level.
    This corresponds to both nested and outer rollbacks, i.e. the innermost rollback that calls the DBAPI's rollback() method, as well as the enclosing rollback calls that only pop themselves from the transaction stack.
    The given Session can be used to invoke SQL and Session.query() operations after an outermost rollback by first checking the Session.is_active flag:

        @event.listens_for(Session, "after_soft_rollback")
        def do_something(session, previous_transaction):
            if session.is_active:
                session.execute("select * from some_table")

    Parameters:
        session - The target Session.
        previous_transaction - The SessionTransaction transactional marker object which was just closed. The current SessionTransaction for the given Session is available via the Session.transaction attribute.

    New in 0.7.3.

before_commit(session)
    Execute before commit is called.
    Note that this may not be per-flush if a longer running transaction is ongoing.

    Parameters:
        session - The target Session.

before_flush(session, flush_context, instances)
    Execute before flush process has started.
    instances is an optional list of objects which were passed to the flush() method.

    Parameters:
        session - The target Session.
        flush_context - Internal UOWTransaction object which handles the details of the flush.
        instances - Usually None, this is the collection of objects which can be passed to the Session.flush() method (note this usage is deprecated).
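As an illustration of before_flush(), a minimal sketch that inspects pending objects; MyClass and its updated_at column are hypothetical:

    import datetime

    from sqlalchemy import event
    from sqlalchemy.orm import Session

    @event.listens_for(Session, 'before_flush')
    def set_modified(session, flush_context, instances):
        # stamp dirty MyClass instances before their UPDATEs are emitted;
        # changing attributes here, unlike in the mapper-level hooks,
        # does take effect in the flush
        for obj in session.dirty:
            if isinstance(obj, MyClass):
                obj.updated_at = datetime.datetime.utcnow()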
class_uninstrument(cls)
    Called before the given class is uninstrumented.
    To get at the ClassManager, use manager_of_class().
The association_proxy is applied to the User class to produce a "view" of the kw relationship, which only exposes the string value of .keyword associated with each Keyword object:

    from sqlalchemy.ext.associationproxy import association_proxy

    class User(Base):
        __tablename__ = 'user'
        id = Column(Integer, primary_key=True)
        name = Column(String(64))
        kw = relationship("Keyword", secondary=lambda: userkeywords_table)

        def __init__(self, name):
            self.name = name

        # proxy the 'keyword' attribute from the 'kw' relationship
        keywords = association_proxy('kw', 'keyword')

We can now reference the .keywords collection as a listing of strings, which is both readable and writable. New Keyword objects are created for us transparently:

    >>> user = User('jek')
    >>> user.keywords.append('cheese inspector')
    >>> user.keywords
    ['cheese inspector']
    >>> user.keywords.append('snack ninja')
    >>> user.kw
    [<__main__.Keyword object at 0x12cdd30>, <__main__.Keyword object at 0x12cde30>]

The AssociationProxy object produced by the association_proxy() function is an instance of a Python descriptor. It is always declared with the user-defined class being mapped, regardless of whether Declarative or classical mappings via the mapper() function are used.

The proxy functions by operating upon the underlying mapped attribute or collection in response to operations, and changes made via the proxy are immediately apparent in the mapped attribute, as well as vice versa. The underlying attribute remains fully accessible.

When first accessed, the association proxy performs introspection operations on the target collection so that its behavior corresponds correctly. Details such as if the locally proxied attribute is a collection (as is typical) or a scalar reference, as well as if the collection acts like a set, list, or dictionary is taken into account, so that the proxy should act just like the underlying collection or attribute does.

Creation of New Values

When a list append() event (or set add(), dictionary __setitem__(), or scalar assignment event) is intercepted by the association proxy, it instantiates a new instance of the "intermediary" object using its constructor, passing as a single argument the given value. In our example above, an operation like:

    user.keywords.append('cheese inspector')

Is translated by the association proxy into the operation:

    user.kw.append(Keyword('cheese inspector'))

The example works here because we have designed the constructor for Keyword to accept a single positional argument, keyword. For those cases where a single-argument constructor isn't feasible, the association proxy's creational behavior can be customized using the creator argument, which references a callable (i.e. Python function) that will produce a new object instance given the singular argument. Below we illustrate this using a lambda as is typical:
    class User(Base):
        # ...

        # use Keyword(keyword=kw) on append() events
        keywords = association_proxy('kw', 'keyword',
                        creator=lambda kw: Keyword(keyword=kw))

The creator function accepts a single argument in the case of a list- or set- based collection, or a scalar attribute. In the case of a dictionary-based collection, it accepts two arguments, "key" and "value". An example of this is below in Proxying to Dictionary Based Collections.

Simplifying Association Objects

The "association object" pattern is an extended form of a many-to-many relationship, and is described at Association Object.

Association proxies are useful for keeping "association objects" out of the way during regular use. Suppose our userkeywords table above had additional columns which we'd like to map explicitly, but in most cases we don't require direct access to these attributes. Below, we illustrate a new mapping which introduces the UserKeyword class, which is mapped to the userkeywords table illustrated earlier. This class adds an additional column special_key, a value which we occasionally want to access, but not in the usual case. We create an association proxy on the User class called keywords, which will bridge the gap from the user_keywords collection of User to the .keyword attribute present on each UserKeyword:

    from sqlalchemy import Column, Integer, String, ForeignKey
    from sqlalchemy.orm import relationship, backref
    from sqlalchemy.ext.associationproxy import association_proxy
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class User(Base):
        __tablename__ = 'user'
        id = Column(Integer, primary_key=True)
        name = Column(String(64))

        # association proxy of "user_keywords" collection
        # to "keyword" attribute
        keywords = association_proxy('user_keywords', 'keyword')

        def __init__(self, name):
            self.name = name

    class UserKeyword(Base):
        __tablename__ = 'user_keyword'
        user_id = Column(Integer, ForeignKey('user.id'), primary_key=True)
        keyword_id = Column(Integer, ForeignKey('keyword.id'), primary_key=True)
        special_key = Column(String(50))

        # bidirectional attribute/collection of "user"/"user_keywords"
        user = relationship(User,
                    backref=backref("user_keywords",
                                    cascade="all, delete-orphan")
                )
        # reference to the "Keyword" object
        keyword = relationship("Keyword")

        def __init__(self, keyword=None, user=None, special_key=None):
            self.user = user
            self.keyword = keyword
            self.special_key = special_key

    class Keyword(Base):
        __tablename__ = 'keyword'
        id = Column(Integer, primary_key=True)
        keyword = Column('keyword', String(64))

        def __init__(self, keyword):
            self.keyword = keyword

        def __repr__(self):
            return 'Keyword(%s)' % repr(self.keyword)

With the above configuration, we can operate upon the .keywords collection of each User object, and the usage of UserKeyword is concealed:

    >>> user = User('log')
    >>> for kw in (Keyword('new_from_blammo'), Keyword('its_big')):
    ...     user.keywords.append(kw)
    ...
    >>> print(user.keywords)
    [Keyword('new_from_blammo'), Keyword('its_big')]

Where above, each .keywords.append() operation is equivalent to:

    >>> user.user_keywords.append(UserKeyword(Keyword('its_heavy')))

The UserKeyword association object has two attributes here which are populated; the .keyword attribute is populated directly as a result of passing the Keyword object as the first argument. The .user argument is then assigned as the UserKeyword object is appended to the User.user_keywords collection, where the bidirectional relationship configured between User.user_keywords and UserKeyword.user results in a population of the UserKeyword.user attribute. The special_key argument above is left at its default value of None.

For those cases where we do want special_key to have a value, we create the UserKeyword object explicitly. Below we assign all three attributes, where the assignment of .user has the effect of the UserKeyword being appended to the User.user_keywords collection:

    >>> UserKeyword(Keyword('its_wood'), user, special_key='my special key')

The association proxy returns to us a collection of Keyword objects represented by all these operations:
    >>> user.keywords
    [Keyword('new_from_blammo'), Keyword('its_big'), Keyword('its_heavy'), Keyword('its_wood')]

Proxying to Dictionary Based Collections

The association proxy can proxy to dictionary based collections as well. SQLAlchemy mappings usually use the attribute_mapped_collection() collection type to create dictionary collections, as well as the extended techniques described in Custom Dictionary-Based Collections.

The association proxy adjusts its behavior when it detects the usage of a dictionary-based collection. When new values are added to the dictionary, the association proxy instantiates the intermediary object by passing two arguments to the
creation function instead of one, the key and the value. As always, this creation function defaults to the constructor of the intermediary class, and can be customized using the creator argument.

Below, we modify our UserKeyword example such that the User.user_keywords collection will now be mapped using a dictionary, where the UserKeyword.special_key argument will be used as the key for the dictionary. We then apply a creator argument to the User.keywords proxy so that these values are assigned appropriately when new elements are added to the dictionary:

    from sqlalchemy import Column, Integer, String, ForeignKey
    from sqlalchemy.orm import relationship, backref
    from sqlalchemy.ext.associationproxy import association_proxy
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm.collections import attribute_mapped_collection
Base = declarative_base()

class User(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    name = Column(String(64))

    # proxy to 'user_keywords', instantiating UserKeyword
    # assigning the new key to 'special_key', values to
    # 'keyword'.
    keywords = association_proxy('user_keywords', 'keyword',
                    creator=lambda k, v:
                                UserKeyword(special_key=k, keyword=v)
                )

    def __init__(self, name):
        self.name = name

class UserKeyword(Base):
    __tablename__ = 'user_keyword'
    user_id = Column(Integer, ForeignKey('user.id'), primary_key=True)
    keyword_id = Column(Integer, ForeignKey('keyword.id'), primary_key=True)
    special_key = Column(String)

    # bidirectional user/user_keywords relationships, mapping
    # user_keywords with a dictionary against "special_key" as key.
    user = relationship(User,
                backref=backref(
                    "user_keywords",
                    collection_class=attribute_mapped_collection("special_key"),
                    cascade="all, delete-orphan"
                )
            )
    keyword = relationship("Keyword")

class Keyword(Base):
    __tablename__ = 'keyword'
    id = Column(Integer, primary_key=True)
    keyword = Column('keyword', String(64))

    def __init__(self, keyword):
        self.keyword = keyword
    def __repr__(self):
        return 'Keyword(%s)' % repr(self.keyword)

We illustrate the .keywords collection as a dictionary, mapping the UserKeyword.special_key value to Keyword objects:

>>> user = User('log')
>>> user.keywords['sk1'] = Keyword('kw1')
>>> user.keywords['sk2'] = Keyword('kw2')
>>> print(user.keywords)
{'sk1': Keyword('kw1'), 'sk2': Keyword('kw2')}

Composite Association Proxies

Given our previous examples of proxying from relationship to scalar attribute, proxying across an association object, and proxying dictionaries, we can combine all three techniques together to give User a keywords dictionary that deals strictly with the string value of special_key mapped to the string keyword. Both the UserKeyword and Keyword classes are entirely concealed. This is achieved by building an association proxy on User that refers to an association proxy present on UserKeyword:

from sqlalchemy import Column, Integer, String, ForeignKey
from sqlalchemy.orm import relationship, backref
from sqlalchemy.ext.associationproxy import association_proxy
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm.collections import attribute_mapped_collection

Base = declarative_base()

class User(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    name = Column(String(64))

    # the same 'user_keywords'->'keyword' proxy as in
    # the basic dictionary example
    keywords = association_proxy(
                'user_keywords',
                'keyword',
                creator=lambda k, v:
                            UserKeyword(special_key=k, keyword=v)
                )

    def __init__(self, name):
        self.name = name

class UserKeyword(Base):
    __tablename__ = 'user_keyword'
    user_id = Column(Integer, ForeignKey('user.id'), primary_key=True)
    keyword_id = Column(Integer, ForeignKey('keyword.id'), primary_key=True)
    special_key = Column(String)
    user = relationship(User,
                backref=backref(
                    "user_keywords",
                    collection_class=attribute_mapped_collection("special_key"),
                    cascade="all, delete-orphan"
                )
            )

    # the relationship to Keyword is now called
    # 'kw'
    kw = relationship("Keyword")

    # 'keyword' is changed to be a proxy to the
    # 'keyword' attribute of 'Keyword'
    keyword = association_proxy('kw', 'keyword')

class Keyword(Base):
    __tablename__ = 'keyword'
    id = Column(Integer, primary_key=True)
    keyword = Column('keyword', String(64))

    def __init__(self, keyword):
        self.keyword = keyword

User.keywords is now a dictionary of string to string, where UserKeyword and Keyword objects are created and removed for us transparently using the association proxy. In the example below, we illustrate usage of the assignment operator, also appropriately handled by the association proxy, to apply a dictionary value to the collection at once:

>>> user = User('log')
>>> user.keywords = {
...     'sk1': 'kw1',
...     'sk2': 'kw2'
... }
>>> print(user.keywords)
{'sk1': 'kw1', 'sk2': 'kw2'}

>>> user.keywords['sk3'] = 'kw3'
>>> del user.keywords['sk2']
>>> print(user.keywords)
{'sk1': 'kw1', 'sk3': 'kw3'}

>>> # illustrate un-proxied usage
... print(user.user_keywords['sk3'].kw)
<__main__.Keyword object at 0x12ceb90>

One caveat with our example above is that because Keyword objects are created for each dictionary set operation, the example fails to maintain uniqueness for the Keyword objects on their string name, which is a typical requirement for a tagging scenario such as this one. For this use case the recipe UniqueObject, or a comparable creational strategy, is recommended, which will apply a "lookup first, then create" strategy to the constructor of the Keyword class, so that an already existing Keyword is returned if the given name is already present. A sketch of this strategy appears after the querying examples below.

Querying with Association Proxies

The AssociationProxy features simple SQL construction capabilities which relate down to the underlying relationship() in use as well as the target attribute. For example, the
RelationshipProperty.Comparator.any() and RelationshipProperty.Comparator.has() operations are available, and will produce a nested EXISTS clause, such as in our basic association object example:

>>> print(session.query(User).filter(User.keywords.any(keyword='jek')))
SELECT user.id AS user_id, user.name AS user_name
FROM user
WHERE EXISTS (SELECT 1
FROM user_keyword
WHERE user.id = user_keyword.user_id AND (EXISTS (SELECT 1
FROM keyword
WHERE keyword.id = user_keyword.keyword_id AND keyword.keyword = :keyword_1)))

For a proxy to a scalar attribute, __eq__() is supported:

>>> print(session.query(UserKeyword).filter(UserKeyword.keyword == 'jek'))
SELECT user_keyword.*
FROM user_keyword
WHERE EXISTS (SELECT 1
FROM keyword
WHERE keyword.id = user_keyword.keyword_id AND keyword.keyword = :keyword_1)

and .contains() is available for a proxy to a scalar collection:

>>> print(session.query(User).filter(User.keywords.contains('jek')))
SELECT user.*
FROM user
WHERE EXISTS (SELECT 1
FROM userkeywords, keyword
WHERE user.id = userkeywords.user_id
    AND keyword.id = userkeywords.keyword_id
    AND keyword.keyword = :keyword_1)

AssociationProxy can be used with Query.join() somewhat manually using the attr attribute in a star-args context (new in 0.7.3):

q = session.query(User).join(*User.keywords.attr)

attr is composed of AssociationProxy.local_attr and AssociationProxy.remote_attr, which are just synonyms for the actual proxied attributes, and can also be used for querying (also new in 0.7.3):

uka = aliased(UserKeyword)
ka = aliased(Keyword)
q = session.query(User).\
        join(uka, User.keywords.local_attr).\
        join(ka, User.keywords.remote_attr)
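Returning to the uniqueness caveat noted earlier, here is a minimal sketch of a "lookup first, then create" creator for the composite example. The helper name is hypothetical, and a module-level session is assumed to be accessible; the UniqueObject recipe offers a more complete treatment:

def create_user_keyword(special_key, kw):
    # query for an existing Keyword row first; create one only if absent
    keyword = session.query(Keyword).filter_by(keyword=kw).first()
    if keyword is None:
        keyword = Keyword(kw)
    uk = UserKeyword(special_key=special_key)
    # assign the relationship directly rather than through the
    # UserKeyword.keyword proxy, so no duplicate Keyword is created
    uk.kw = keyword
    return uk

keywords = association_proxy('user_keywords', 'keyword',
                             creator=create_user_keyword)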
API Documentation

sqlalchemy.ext.associationproxy.association_proxy(target_collection, attr, **kw)
    Return a Python property implementing a view of a target attribute which references an attribute on members of the target.

    The returned value is an instance of AssociationProxy.

    Implements a Python property representing a relationship as a collection of simpler values, or a scalar value. The proxied property will mimic the collection type of the target (list, dict or set), or, in the case of a one to one relationship, a simple scalar value.

    Parameters

    target_collection – Name of the attribute we'll proxy to. This attribute is typically mapped by relationship() to link to a target collection, but can also be a many-to-one or non-scalar relationship.

    attr – Attribute on the associated instance or instances we'll proxy for. For example, given a target collection of [obj1, obj2], a list created by this proxy property would look like [getattr(obj1, attr), getattr(obj2, attr)]. If the relationship is one-to-one or otherwise uselist=False, then simply: getattr(obj, attr)

    creator – optional. When new items are added to this proxied collection, new instances of the class collected by the target collection will be created. For list and set collections, the target class constructor will be called with the 'value' for the new instance. For dict types, two arguments are passed: key and value.

    If you want to construct instances differently, supply a creator function that takes arguments as above and returns instances.

    For scalar relationships, creator() will be called if the target is None. If the target is present, set operations are proxied to setattr() on the associated object.

    If you have an associated object with multiple attributes, you may set up multiple association proxies mapping to different attributes. See the unit tests for examples, and for examples of how creator() functions can be used to construct the scalar relationship on-demand in this situation.

    **kw – Passes along any other keyword arguments to AssociationProxy.

class sqlalchemy.ext.associationproxy.AssociationProxy(target_collection, attr, creator=None, getset_factory=None, proxy_factory=None, proxy_bulk_set=None)
    A descriptor that presents a read/write view of an object attribute.

    __init__(target_collection, attr, creator=None, getset_factory=None, proxy_factory=None, proxy_bulk_set=None)
        Construct a new AssociationProxy.

        The association_proxy() function is provided as the usual entrypoint here, though AssociationProxy can be instantiated and/or subclassed directly.

        Parameters

        target_collection – Name of the collection we'll proxy to, usually created with relationship().

        attr – Attribute on the collected instances we'll proxy for. For example, given a target collection of [obj1, obj2], a list created by this proxy property would look like [getattr(obj1, attr), getattr(obj2, attr)]

        creator – Optional. When new items are added to this proxied collection, new instances of the class collected by the target collection will be created. For list and set collections, the target class constructor will be called with the 'value' for the new instance. For dict types, two arguments are passed: key and value. If you want to construct instances differently, supply a creator function that takes arguments as above and returns instances.
        getset_factory – Optional. Proxied attribute access is automatically handled by routines that get and set values based on the attr argument for this proxy.

        If you would like to customize this behavior, you may supply a getset_factory callable that produces a tuple of getter and setter functions. The factory is called with two arguments, the abstract type of the underlying collection and this proxy instance.

        proxy_factory – Optional. The type of collection to emulate is determined by sniffing the target collection. If your collection type can't be determined by duck typing or you'd like to use a different collection implementation, you may supply a factory function to produce those collections. Only applicable to non-scalar relationships.

        proxy_bulk_set – Optional, use with proxy_factory. See the _set() method for details.

    any(criterion=None, **kwargs)
        Produce a proxied 'any' expression using EXISTS.

        This expression will be a composed product using the RelationshipProperty.Comparator.any() and/or RelationshipProperty.Comparator.has() operators of the underlying proxied attributes.

    attr
        Return a tuple of (local_attr, remote_attr).

        This attribute is convenient when specifying a join using Query.join() across two relationships:

        sess.query(Parent).join(*Parent.proxied.attr)

        New in 0.7.3.

        See also:

        AssociationProxy.local_attr

        AssociationProxy.remote_attr

    contains(obj)
        Produce a proxied 'contains' expression using EXISTS.
        This expression will be a composed product using the RelationshipProperty.Comparator.any(), RelationshipProperty.Comparator.has(), and/or RelationshipProperty.Comparator.contains() operators of the underlying proxied attributes.

    has(criterion=None, **kwargs)
        Produce a proxied 'has' expression using EXISTS.

        This expression will be a composed product using the RelationshipProperty.Comparator.any() and/or RelationshipProperty.Comparator.has() operators of the underlying proxied attributes.

    local_attr
        The 'local' MapperProperty referenced by this AssociationProxy.

        New in 0.7.3.

        See also:

        AssociationProxy.attr

        AssociationProxy.remote_attr

    remote_attr
        The 'remote' MapperProperty referenced by this AssociationProxy.
        New in 0.7.3.

        See also:

        AssociationProxy.attr

        AssociationProxy.local_attr

    scalar
        Return True if this AssociationProxy proxies a scalar relationship on the local side.

    target_class
        The intermediary class handled by this AssociationProxy.

        Intercepted append/set/assignment events will result in the generation of new instances of this class.
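The getset_factory hook described above can be illustrated with a small sketch. The factory below is hypothetical, not part of the library; it assumes the documented call signature of (collection type, proxy instance) and the proxy's value_attr attribute naming the proxied attribute:

def stripping_getset_factory(collection_type, proxy):
    # produce (getter, setter) functions that normalize string values;
    # only suitable where the proxied attribute holds plain strings
    def getter(obj):
        value = getattr(obj, proxy.value_attr)
        return value.strip() if value is not None else None

    def setter(obj, value):
        setattr(obj, proxy.value_attr, value.strip())

    return getter, setter

# applied to a proxy, e.g.:
# texts = association_proxy('lines', 'text',
#                           getset_factory=stripping_getset_factory)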
2.10.2 Declarative
Synopsis

SQLAlchemy object-relational configuration involves the combination of Table, mapper(), and class objects to define a mapped class. declarative allows all three to be expressed at once within the class declaration. As much as possible, regular SQLAlchemy schema and ORM constructs are used directly, so that configuration between "classical" ORM usage and declarative remains highly similar.

As a simple example:

from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class SomeClass(Base):
    __tablename__ = 'some_table'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

Above, the declarative_base() callable returns a new base class from which all mapped classes should inherit. When the class definition is completed, a new Table and mapper() will have been generated.

The resulting table and mapper are accessible via __table__ and __mapper__ attributes on the SomeClass class:

# access the mapped Table
SomeClass.__table__

# access the Mapper
SomeClass.__mapper__

Defining Attributes

In the previous example, the Column objects are automatically named with the name of the attribute to which they are assigned.

To name columns explicitly with a name distinct from their mapped attribute, just give the column a name. Below, column "some_table_id" is mapped to the "id" attribute of SomeClass, but in SQL will be represented as "some_table_id":

class SomeClass(Base):
    __tablename__ = 'some_table'
    id = Column("some_table_id", Integer, primary_key=True)
Attributes may be added to the class after its construction, and they will be added to the underlying Table and mapper() definitions as appropriate:

SomeClass.data = Column('data', Unicode)
SomeClass.related = relationship(RelatedInfo)

Classes which are constructed using declarative can interact freely with classes that are mapped explicitly with mapper().

It is recommended, though not required, that all tables share the same underlying MetaData object, so that string-configured ForeignKey references can be resolved without issue.

Accessing the MetaData

The declarative_base() base class contains a MetaData object where newly defined Table objects are collected. This object is intended to be accessed directly for MetaData-specific operations. Such as, to issue CREATE statements for all tables:

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

The usual techniques of associating MetaData with Engine apply, such as assigning to the bind attribute:

Base.metadata.bind = create_engine('sqlite://')

To associate the engine with the declarative_base() at time of construction, the bind argument is accepted:

Base = declarative_base(bind=create_engine('sqlite://'))

declarative_base() can also receive a pre-existing MetaData object, which allows a declarative setup to be associated with an already existing traditional collection of Table objects:

mymetadata = MetaData()
Base = declarative_base(metadata=mymetadata)

Configuring Relationships

Relationships to other classes are done in the usual way, with the added feature that the class specified to relationship() may be a string name. The "class registry" associated with Base is used at mapper compilation time to resolve the name into the actual class object, which is expected to have been defined once the mapper configuration is used:

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    addresses = relationship("Address", backref="user")

class Address(Base):
    __tablename__ = 'addresses'
    id = Column(Integer, primary_key=True)
    email = Column(String(50))
    user_id = Column(Integer, ForeignKey('users.id'))

Column constructs, since they are just that, are immediately usable, as below where we define a primary join condition on the Address class using them:
class Address(Base):
    __tablename__ = 'addresses'
    id = Column(Integer, primary_key=True)
    email = Column(String(50))
    user_id = Column(Integer, ForeignKey('users.id'))
    user = relationship(User, primaryjoin=user_id == User.id)

In addition to the main argument for relationship(), other arguments which depend upon the columns present on an as-yet undefined class may also be specified as strings. These strings are evaluated as Python expressions. The full namespace available within this evaluation includes all classes mapped for this declarative base, as well as the contents of the sqlalchemy package, including expression functions like desc() and func:

class User(Base):
    # ....
    addresses = relationship("Address",
                             order_by="desc(Address.email)",
                             primaryjoin="Address.user_id==User.id")

As an alternative to string-based attributes, attributes may also be defined after all classes have been created. Just add them to the target class after the fact:

User.addresses = relationship(Address,
                              primaryjoin=Address.user_id == User.id)

Configuring Many-to-Many Relationships

Many-to-many relationships are also declared in the same way with declarative as with traditional mappings. The secondary argument to relationship() is as usual passed a Table object, which is typically declared in the traditional way. The Table usually shares the MetaData object used by the declarative base:

keywords = Table(
    'keywords', Base.metadata,
    Column('author_id', Integer, ForeignKey('authors.id')),
    Column('keyword_id', Integer, ForeignKey('keywords.id'))
)

class Author(Base):
    __tablename__ = 'authors'
    id = Column(Integer, primary_key=True)
    keywords = relationship("Keyword", secondary=keywords)

As with traditional mapping, it's generally not a good idea to use a Table as the "secondary" argument which is also mapped to a class, unless the relationship is declared with viewonly=True. Otherwise, the unit-of-work system may attempt duplicate INSERT and DELETE statements against the underlying table.

Defining SQL Expressions

See SQL Expressions as Mapped Attributes for examples on declaratively mapping attributes to SQL expressions.

Table Configuration

Table arguments other than the name, metadata, and mapped Column arguments are specified using the __table_args__ class attribute. This attribute accommodates both positional as well as keyword arguments that are normally sent to the Table constructor. The attribute can be specified in one of two forms. One is as a dictionary:
class MyClass(Base):
    __tablename__ = 'sometable'
    __table_args__ = {'mysql_engine': 'InnoDB'}

The other, a tuple, where each argument is positional (usually constraints):

class MyClass(Base):
    __tablename__ = 'sometable'
    __table_args__ = (
        ForeignKeyConstraint(['id'], ['remote_table.id']),
        UniqueConstraint('foo'),
    )

Keyword arguments can be specified with the above form by specifying the last argument as a dictionary:

class MyClass(Base):
    __tablename__ = 'sometable'
    __table_args__ = (
        ForeignKeyConstraint(['id'], ['remote_table.id']),
        UniqueConstraint('foo'),
        {'autoload': True}
    )

Using a Hybrid Approach with __table__

As an alternative to __tablename__, a direct Table construct may be used. The Column objects, which in this case require their names, will be added to the mapping just like a regular mapping to a table:

class MyClass(Base):
    __table__ = Table('my_table', Base.metadata,
        Column('id', Integer, primary_key=True),
        Column('name', String(50))
    )

__table__ provides a more focused point of control for establishing table metadata, while still getting most of the benefits of using declarative. An application that uses reflection might want to load table metadata elsewhere and simply pass it to declarative classes:

from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()
Base.metadata.reflect(some_engine)

class User(Base):
    __table__ = Base.metadata.tables['user']

class Address(Base):
    __table__ = Base.metadata.tables['address']

Some configuration schemes may find it more appropriate to use __table__, such as those which already take advantage of the data-driven nature of Table to customize and/or automate schema definition. Note that when the __table__ approach is used, the object is immediately usable as a plain Table within the class declaration body itself, as a Python class is only another syntactical block. Below this is illustrated by using the id column in the primaryjoin condition of a relationship():

class MyClass(Base):
    __table__ = Table('my_table', Base.metadata,
        Column('id', Integer, primary_key=True),
        Column('name', String(50))
    )

    widgets = relationship(Widget,
                           primaryjoin=Widget.myclass_id == __table__.c.id)

Similarly, mapped attributes which refer to __table__ can be placed inline, as below where we assign the name column to the attribute _name, generating a synonym for name:

from sqlalchemy.ext.declarative import synonym_for

class MyClass(Base):
    __table__ = Table('my_table', Base.metadata,
        Column('id', Integer, primary_key=True),
        Column('name', String(50))
    )

    _name = __table__.c.name

    @synonym_for("_name")
    def name(self):
        return "Name: %s" % self._name

Mapper Configuration

Declarative makes use of the mapper() function internally when it creates the mapping to the declared table. The options for mapper() are passed directly through via the __mapper_args__ class attribute. As always, arguments which reference locally mapped columns can reference them directly from within the class declaration:

from datetime import datetime

class Widget(Base):
    __tablename__ = 'widgets'
    id = Column(Integer, primary_key=True)
    timestamp = Column(DateTime, nullable=False)

    __mapper_args__ = {
        'version_id_col': timestamp,
        'version_id_generator': lambda v: datetime.now()
    }

Inheritance Configuration

Declarative supports all three forms of inheritance as intuitively as possible. The inherits mapper keyword argument is not needed as declarative will determine this from the class itself. The various "polymorphic" keyword arguments are specified using __mapper_args__.
Joined Table Inheritance

class Person(Base):
    __tablename__ = 'people'
    id = Column(Integer, primary_key=True)
    discriminator = Column('type', String(50))
    __mapper_args__ = {'polymorphic_on': discriminator}

class Engineer(Person):
    __tablename__ = 'engineers'
    __mapper_args__ = {'polymorphic_identity': 'engineer'}
    id = Column(Integer, ForeignKey('people.id'), primary_key=True)
    primary_language = Column(String(50))

Note that above, the Engineer.id attribute, since it shares the same attribute name as the Person.id attribute, will in fact represent the people.id and engineers.id columns together, and will render inside a query as "people.id". To provide the Engineer class with an attribute that represents only the engineers.id column, give it a different attribute name:

class Engineer(Person):
    __tablename__ = 'engineers'
    __mapper_args__ = {'polymorphic_identity': 'engineer'}
    engineer_id = Column('id', Integer, ForeignKey('people.id'),
                         primary_key=True)
    primary_language = Column(String(50))
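As a quick sketch of how the joined-table mapping above behaves at query time (session setup assumed; the sample data is illustrative):

session.add(Engineer(primary_language='python'))
session.commit()

# querying the base class loads Engineer instances as well; the
# 'type' discriminator column determines the class of each row
for person in session.query(Person):
    print type(person), person.id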
Declarative concrete mappings can use the AbstractConcreteBase helper, which produces no mapped table for the base class itself:

from sqlalchemy.ext.declarative import AbstractConcreteBase

class Employee(AbstractConcreteBase, Base):
    pass

To have a concrete employee table, use ConcreteBase instead:

from sqlalchemy.ext.declarative import ConcreteBase

class Employee(ConcreteBase, Base):
    __tablename__ = 'employee'
    employee_id = Column(Integer, primary_key=True)
    name = Column(String(50))
    __mapper_args__ = {
        'polymorphic_identity': 'employee',
        'concrete': True
    }

Either Employee base can be used in the normal fashion:

class Manager(Employee):
    __tablename__ = 'manager'
    employee_id = Column(Integer, primary_key=True)
    name = Column(String(50))
    manager_data = Column(String(40))
    __mapper_args__ = {
        'polymorphic_identity': 'manager',
        'concrete': True
    }

class Engineer(Employee):
    __tablename__ = 'engineer'
    employee_id = Column(Integer, primary_key=True)
    name = Column(String(50))
    engineer_info = Column(String(40))
    __mapper_args__ = {
        'polymorphic_identity': 'engineer',
        'concrete': True
    }

Mixin and Custom Base Classes

A common need when using declarative is to share some functionality, such as a set of common columns, some common table options, or other mapped properties, across many classes. The standard Python idiom for this is to have the classes inherit from a base which includes these common features.

When using declarative, this idiom is allowed via the usage of a custom declarative base class, as well as a "mixin" class which is inherited from in addition to the primary base. Declarative includes several helper features to make this work in terms of how mappings are declared. An example of some commonly mixed-in idioms is below:

from sqlalchemy.ext.declarative import declared_attr

class MyMixin(object):

    @declared_attr
    def __tablename__(cls):
        return cls.__name__.lower()

    __table_args__ = {'mysql_engine': 'InnoDB'}
    __mapper_args__ = {'always_refresh': True}

    id = Column(Integer, primary_key=True)
class MyModel(MyMixin, Base):
    name = Column(String(1000))

Where above, the class MyModel will contain an "id" column as the primary key, a __tablename__ attribute that derives from the name of the class itself, as well as __table_args__ and __mapper_args__ defined by the MyMixin mixin class.

There's no fixed convention over whether MyMixin precedes Base or not. Normal Python method resolution rules apply, and the above example would work just as well with:

class MyModel(Base, MyMixin):
    name = Column(String(1000))

This works because Base here doesn't define any of the variables that MyMixin defines, i.e. __tablename__, __table_args__, id, etc. If the Base did define an attribute of the same name, the class placed first in the inherits list would determine which attribute is used on the newly defined class.
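The same idioms can also be supplied by a custom base class rather than a mixin. The class below is a sketch of the pattern that the following example assumes; it is then handed to declarative_base() via the cls argument:

from sqlalchemy.ext.declarative import declared_attr

class Base(object):
    # shared idioms for all mapped classes; combined with
    # declarative_base(cls=Base) below
    @declared_attr
    def __tablename__(cls):
        return cls.__name__.lower()

    __table_args__ = {'mysql_engine': 'InnoDB'}

    id = Column(Integer, primary_key=True)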
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base(cls=Base)

class MyModel(Base):
    name = Column(String(1000))

Where above, MyModel and all other classes that derive from Base will have a table name derived from the class name, an "id" primary key column, as well as the "InnoDB" engine for MySQL.
Mixing in Columns
The most basic way to specify a column on a mixin is by simple declaration:

class TimestampMixin(object):
    created_at = Column(DateTime, default=func.now())

class MyModel(TimestampMixin, Base):
    __tablename__ = 'test'
    id = Column(Integer, primary_key=True)
    name = Column(String(1000))

Where above, all declarative classes that include TimestampMixin will also have a column created_at that applies a timestamp to all row insertions.

Those familiar with the SQLAlchemy expression language know that the object identity of clause elements defines their role in a schema. Two Table objects a and b may both have a column called id, but the way these are differentiated is that a.c.id and b.c.id are two distinct Python objects, referencing their parent tables a and b respectively.

In the case of the mixin column, it seems that only one Column object is explicitly created, yet the ultimate created_at column above must exist as a distinct Python object for each separate destination class. To accomplish this, the declarative extension creates a copy of each Column object encountered on a class that is detected as a mixin.

This copy mechanism is limited to simple columns that have no foreign keys, as a ForeignKey itself contains references to columns which can't be properly recreated at this level. For columns that have foreign keys, as well as for the variety of mapper-level constructs that require destination-explicit context, the declared_attr() decorator (renamed from sqlalchemy.util.classproperty in 0.6.5) is provided so that patterns common to many classes can be defined as callables:

from sqlalchemy.ext.declarative import declared_attr

class ReferenceAddressMixin(object):
    @declared_attr
    def address_id(cls):
        return Column(Integer, ForeignKey('address.id'))

class User(ReferenceAddressMixin, Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)

Where above, the address_id class-level callable is executed at the point at which the User class is constructed, and the declarative extension can use the resulting Column object as returned by the method without the need to copy it.

Columns generated by declared_attr() can also be referenced by __mapper_args__ to a limited degree, currently by polymorphic_on and version_id_col, by specifying the class decorator itself into the dictionary - the declarative extension will resolve them at class construction time:

class MyMixin:
    @declared_attr
    def type_(cls):
        return Column(String(50))

    __mapper_args__ = {'polymorphic_on': type_}

class MyModel(MyMixin, Base):
    __tablename__ = 'test'
    id = Column(Integer, primary_key=True)
Mixing in Relationships
Relationships created by relationship() are provided with declarative mixin classes exclusively using the declared_attr() approach, eliminating any ambiguity which could arise when copying a relationship and its
possibly column-bound contents. Below is an example which combines a foreign key column and a relationship so that two classes Foo and Bar can both be configured to reference a common target class via many-to-one:

class RefTargetMixin(object):
    @declared_attr
    def target_id(cls):
        return Column('target_id', ForeignKey('target.id'))

    @declared_attr
    def target(cls):
        return relationship("Target")

class Foo(RefTargetMixin, Base):
    __tablename__ = 'foo'
    id = Column(Integer, primary_key=True)

class Bar(RefTargetMixin, Base):
    __tablename__ = 'bar'
    id = Column(Integer, primary_key=True)

class Target(Base):
    __tablename__ = 'target'
    id = Column(Integer, primary_key=True)

relationship() definitions which require explicit primaryjoin, order_by etc. expressions should use the string forms for these arguments, so that they are evaluated as late as possible. To reference the mixin class in these expressions, use the given cls to get its name:

class RefTargetMixin(object):
    @declared_attr
    def target_id(cls):
        return Column('target_id', ForeignKey('target.id'))

    @declared_attr
    def target(cls):
        return relationship("Target",
            primaryjoin="Target.id==%s.target_id" % cls.__name__
        )
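A brief usage sketch for the mixin above (object construction only; persistence steps are omitted):

f = Foo()
f.target = Target()

b = Bar()
b.target = f.target   # Foo and Bar each hold their own many-to-one
                      # reference to the same Target row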
A mixin can also control table inheritance, returning None from __tablename__ when a subclass should use single table inheritance against its parent:

from sqlalchemy.ext.declarative import declared_attr
from sqlalchemy.ext.declarative import has_inherited_table

class Tablename(object):
    @declared_attr
    def __tablename__(cls):
        if (has_inherited_table(cls) and
            Tablename not in cls.__bases__):
            return None
        return cls.__name__.lower()

class Person(Tablename, Base):
    id = Column(Integer, primary_key=True)
    discriminator = Column('type', String(50))
    __mapper_args__ = {'polymorphic_on': discriminator}

# This is single table inheritance
class Engineer(Person):
    primary_language = Column(String(50))
    __mapper_args__ = {'polymorphic_identity': 'engineer'}

# This is joined table inheritance
class Manager(Tablename, Person):
    id = Column(Integer, ForeignKey('person.id'), primary_key=True)
    preferred_recreation = Column(String(50))
    __mapper_args__ = {'polymorphic_identity': 'manager'}
__declare_last__()
The __declare_last__() hook, introduced in 0.7.3, allows definition of a class level function that is automatically called by the MapperEvents.after_configured() event, which occurs after mappings are assumed to be completed and the "configure" step has finished:

class MyClass(Base):
    @classmethod
    def __declare_last__(cls):
        ""
        # do something with mappings
__abstract__
__abstract__ is introduced in 0.7.3 and causes declarative to skip the production of a table or mapper for the class entirely. A class can be added within a hierarchy in the same way as a mixin (see Mixin and Custom Base Classes), allowing subclasses to extend just from the special class:

class SomeAbstractBase(Base):
    __abstract__ = True

    def some_helpful_method(self):
        ""

    @declared_attr
    def __mapper_args__(cls):
        return {"helpful mapper arguments": True}

class MyMappedClass(SomeAbstractBase):
    ""
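As a more concrete sketch, an __abstract__ base commonly carries helper methods shared by every mapped subclass; the names below are hypothetical:

class CommonHelpers(Base):
    __abstract__ = True

    def to_dict(self):
        # illustrative helper available on all mapped subclasses
        return dict((c.name, getattr(self, c.name))
                    for c in self.__table__.columns)

class Widget(CommonHelpers):
    __tablename__ = 'widget'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))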
Class Constructor

As a convenience feature, the declarative_base() sets a default constructor on classes which takes keyword arguments, and assigns them to the named attributes:

e = Engineer(primary_language='python')

Sessions

Note that declarative does nothing special with sessions, and is only intended as an easier way to configure mappers and Table objects. A typical application setup using scoped_session() might look like:

engine = create_engine('postgresql://scott:tiger@localhost/test')
Session = scoped_session(sessionmaker(autocommit=False,
                                      autoflush=False,
                                      bind=engine))
Base = declarative_base()

Mapped instances then make usage of Session in the usual way.

API Reference

sqlalchemy.ext.declarative.declarative_base(bind=None, metadata=None, mapper=None, cls=<type 'object'>, name='Base', constructor=<function __init__>, metaclass=<class 'sqlalchemy.ext.declarative.DeclarativeMeta'>)
    Construct a base class for declarative class definitions.

    The new base class will be given a metaclass that produces appropriate Table objects and makes the appropriate mapper() calls based on the information provided declaratively in the class and any subclasses of the class.

    Parameters

    bind – An optional Connectable, will be assigned the bind attribute on the MetaData instance.

    metadata – An optional MetaData instance. All Table objects implicitly declared by subclasses of the base will share this MetaData. A MetaData instance will be created if none is provided. The MetaData instance will be available via the metadata attribute of the generated declarative base class.

    mapper – An optional callable, defaults to mapper(). Will be used to map subclasses to their Tables.

    cls – Defaults to object. A type to use as the base for the generated declarative base class. May be a class or tuple of classes.

    name – Defaults to 'Base'. The display name for the generated class. Customizing this is not required, but can improve clarity in tracebacks and debugging.

    constructor – Defaults to _declarative_constructor(), an __init__ implementation that assigns **kwargs for declared fields and relationships to an instance. If None is supplied, no __init__ will be provided and construction will fall back to cls.__init__ by way of the normal Python semantics.

    metaclass – Defaults to DeclarativeMeta. A metaclass or __metaclass__ compatible callable to use as the meta type of the generated declarative base class.
class sqlalchemy.ext.declarative.declared_attr(fget, *arg, **kw)
    Mark a class-level method as representing the definition of a mapped property or special declarative member name.

    Note: @declared_attr is available as sqlalchemy.util.classproperty for SQLAlchemy versions 0.6.2, 0.6.3, 0.6.4.

    @declared_attr turns the attribute into a scalar-like property that can be invoked from the uninstantiated class. Declarative treats attributes specifically marked with @declared_attr as returning a construct that is specific to mapping or declarative table configuration. The name of the attribute is that of what the non-dynamic version of the attribute would be.

    @declared_attr is more often than not applicable to mixins, to define relationships that are to be applied to different implementors of the class:

    class ProvidesUser(object):
        "A mixin that adds a 'user' relationship to classes."

        @declared_attr
        def user(self):
            return relationship("User")

    It also can be applied to mapped classes, such as to provide a "polymorphic" scheme for inheritance:

    class Employee(Base):
        id = Column(Integer, primary_key=True)
        type = Column(String(50), nullable=False)

        @declared_attr
        def __tablename__(cls):
            return cls.__name__.lower()

        @declared_attr
        def __mapper_args__(cls):
            if cls.__name__ == 'Employee':
                return {
                    "polymorphic_on": cls.type,
                    "polymorphic_identity": "Employee"
                }
            else:
                return {"polymorphic_identity": cls.__name__}

sqlalchemy.ext.declarative._declarative_constructor(self, **kwargs)
    A simple constructor that allows initialization from kwargs.

    Sets attributes on the constructed instance using the names and values in kwargs.

    Only keys that are present as attributes of the instance's class are allowed. These could be, for example, any mapped columns or relationships.

sqlalchemy.ext.declarative.has_inherited_table(cls)
    Given a class, return True if any of the classes it inherits from has a mapped table, otherwise return False.

sqlalchemy.ext.declarative.synonym_for(name, map_column=False)
    Decorator, make a Python @property a query synonym for a column.

    A decorator version of synonym(). The function being decorated is the descriptor, otherwise passes its arguments through to synonym():
    @synonym_for('col')
    @property
    def prop(self):
        return 'special sauce'

    The regular synonym() is also usable directly in a declarative setting and may be convenient for read/write properties:

    prop = synonym('col', descriptor=property(_read_prop, _write_prop))

sqlalchemy.ext.declarative.comparable_using(comparator_factory)
    Decorator, allow a Python @property to be used in query criteria.

    This is a decorator front end to comparable_property() that passes through the comparator_factory and the function being decorated:

    @comparable_using(MyComparatorType)
    @property
    def prop(self):
        return 'special sauce'

    The regular comparable_property() is also usable directly in a declarative setting and may be convenient for read/write properties:

    prop = comparable_property(MyComparatorType)

sqlalchemy.ext.declarative.instrument_declarative(cls, registry, metadata)
    Given a class, configure the class declaratively, using the given registry, which can be any dictionary, and MetaData object.

class sqlalchemy.ext.declarative.AbstractConcreteBase
    A helper class for 'concrete' declarative mappings.

    AbstractConcreteBase will use the polymorphic_union() function automatically, against all tables mapped as a subclass to this class. The function is called via the __declare_last__() function, which is essentially a hook for the MapperEvents.after_configured() event.

    AbstractConcreteBase does not produce a mapped table for the class itself. Compare to ConcreteBase, which does.

    Example:

    from sqlalchemy.ext.declarative import AbstractConcreteBase

    class Employee(AbstractConcreteBase, Base):
        pass

    class Manager(Employee):
        __tablename__ = 'manager'
        employee_id = Column(Integer, primary_key=True)
        name = Column(String(50))
        manager_data = Column(String(40))
        __mapper_args__ = {
            'polymorphic_identity': 'manager',
            'concrete': True
        }
class sqlalchemy.ext.declarative.ConcreteBase
    A helper class for 'concrete' declarative mappings.

    ConcreteBase will use the polymorphic_union() function automatically, against all tables mapped as a subclass to this class. The function is called via the __declare_last__() function, which is essentially a hook for the MapperEvents.after_configured() event.

    ConcreteBase produces a mapped table for the class itself. Compare to AbstractConcreteBase, which does not.

    Example:

    from sqlalchemy.ext.declarative import ConcreteBase

    class Employee(ConcreteBase, Base):
        __tablename__ = 'employee'
        employee_id = Column(Integer, primary_key=True)
        name = Column(String(50))
        __mapper_args__ = {
            'polymorphic_identity': 'employee',
            'concrete': True
        }

    class Manager(Employee):
        __tablename__ = 'manager'
        employee_id = Column(Integer, primary_key=True)
        name = Column(String(50))
        manager_data = Column(String(40))
        __mapper_args__ = {
            'polymorphic_identity': 'manager',
            'concrete': True
        }
The example below uses a custom type, JSONEncodedDict, that marshals Python dictionaries into JSON strings before persistence:

import json
from sqlalchemy.types import TypeDecorator, VARCHAR

class JSONEncodedDict(TypeDecorator):
    "Represents an immutable structure as a json-encoded string."

    impl = VARCHAR

    def process_bind_param(self, value, dialect):
        if value is not None:
            value = json.dumps(value)
        return value

    def process_result_value(self, value, dialect):
        if value is not None:
            value = json.loads(value)
        return value

The usage of json is only for the purposes of example. The sqlalchemy.ext.mutable extension can be used with any type whose target Python type may be mutable, including PickleType, postgresql.ARRAY, etc.

When using the sqlalchemy.ext.mutable extension, the value itself tracks all parents which reference it. Here we will replace the usage of plain Python dictionaries with a dict subclass that implements the Mutable mixin:

import collections
from sqlalchemy.ext.mutable import Mutable

class MutationDict(Mutable, dict):
    @classmethod
    def coerce(cls, key, value):
        "Convert plain dictionaries to MutationDict."
        if not isinstance(value, MutationDict):
            if isinstance(value, dict):
                return MutationDict(value)

            # this call will raise ValueError
            return Mutable.coerce(key, value)
        else:
            return value

    def __setitem__(self, key, value):
        "Detect dictionary set events and emit change events."
        dict.__setitem__(self, key, value)
        self.changed()

    def __delitem__(self, key):
        "Detect dictionary del events and emit change events."
        dict.__delitem__(self, key)
        self.changed()

The above dictionary class takes the approach of subclassing the Python built-in dict to produce a dict subclass which routes all mutation events through __setitem__. There are many variants on this approach, such as subclassing UserDict.UserDict, the newer collections.MutableMapping, etc. The part that's important to this example is that the Mutable.changed() method is called whenever an in-place change to the datastructure takes place.

We also redefine the Mutable.coerce() method which will be used to convert any values that are not instances of MutationDict, such as the plain dictionaries returned by the json module, into the appropriate type. Defining this method is optional; we could just as well have created our JSONEncodedDict such that it always returns an instance of MutationDict, and additionally ensured that all calling code uses MutationDict explicitly. When Mutable.coerce() is not overridden, any values applied to a parent object which are not instances of the mutable
type will raise a ValueError.

Our new MutationDict type offers a class method as_mutable() which we can use within column metadata to associate with types. This method grabs the given type object or class and associates a listener that will detect all future mappings of this type, applying event listening instrumentation to the mapped attribute. Such as, with classical table metadata:

from sqlalchemy import Table, Column, Integer

my_data = Table('my_data', metadata,
    Column('id', Integer, primary_key=True),
    Column('data', MutationDict.as_mutable(JSONEncodedDict))
)

Above, as_mutable() returns an instance of JSONEncodedDict (if the type object was not an instance already), which will intercept any attributes which are mapped against this type. Below we establish a simple mapping against the my_data table:

from sqlalchemy.orm import mapper

class MyDataClass(object):
    pass

# associates mutation listeners with MyDataClass.data
mapper(MyDataClass, my_data)

The MyDataClass.data member will now be notified of in place changes to its value.

There's no difference in usage when using declarative:

from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class MyDataClass(Base):
    __tablename__ = 'my_data'
    id = Column(Integer, primary_key=True)
    data = Column(MutationDict.as_mutable(JSONEncodedDict))

Any in-place changes to the MyDataClass.data member will flag the attribute as "dirty" on the parent object:

>>> from sqlalchemy.orm import Session

>>> sess = Session()
>>> m1 = MyDataClass(data={'value1': 'foo'})
>>> sess.add(m1)
>>> sess.commit()
>>> m1.data['value1'] = 'bar'
>>> assert m1 in sess.dirty

The MutationDict can be associated with all future instances of JSONEncodedDict in one step, using associate_with(). This is similar to as_mutable() except it will intercept all occurrences of MutationDict in all mappings unconditionally, without the need to declare it individually:

MutationDict.associate_with(JSONEncodedDict)

class MyDataClass(Base):
    __tablename__ = 'my_data'
    id = Column(Integer, primary_key=True)
    data = Column(JSONEncodedDict)
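The same pattern extends to other container types. As one of the "variants on this approach" mentioned above, a minimal list-based sketch (not part of the extension itself) might look like:

from sqlalchemy.ext.mutable import Mutable

class MutationList(Mutable, list):
    @classmethod
    def coerce(cls, key, value):
        "Convert plain lists to MutationList."
        if not isinstance(value, MutationList):
            if isinstance(value, list):
                return MutationList(value)
            return Mutable.coerce(key, value)
        return value

    def append(self, item):
        "Detect list append events and emit change events."
        list.append(self, item)
        self.changed()

    def __setitem__(self, index, value):
        "Detect list set events and emit change events."
        list.__setitem__(self, index, value)
        self.changed()

    # other mutating methods (extend, pop, __delitem__, ...) would be
    # wrapped similarly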
Supporting Pickling
The key to the sqlalchemy.ext.mutable extension relies upon the placement of a weakref.WeakKeyDictionary upon the value object, which stores a mapping of parent mapped objects keyed to the attribute name under which they are associated with this value. WeakKeyDictionary objects are not picklable, due to the fact that they contain weakrefs and function callbacks. In our case, this is a good thing, since if this dictionary were picklable, it could lead to an excessively large pickle size for our value objects that are pickled by themselves outside of the context of the parent. The developer responsibility here is only to provide a __getstate__ method that excludes the _parents() collection from the pickle stream:

class MyMutableType(Mutable):
    def __getstate__(self):
        d = self.__dict__.copy()
        d.pop('_parents', None)
        return d

With our dictionary example, we need to return the contents of the dict itself (and also restore them on __setstate__):

class MutationDict(Mutable, dict):
    # ....

    def __getstate__(self):
        return dict(self)

    def __setstate__(self, state):
        self.update(state)

In the case that our mutable value object is pickled as it is attached to one or more parent objects that are also part of the pickle, the Mutable mixin will re-establish the Mutable._parents collection on each value object as the owning parents themselves are unpickled.

Establishing Mutability on Composites

Composites are a special ORM feature which allow a single scalar attribute to be assigned an object value which represents information "composed" from one or more columns from the underlying mapped table. The usual example is that of a geometric point, and is introduced in Composite Column Types.

As of SQLAlchemy 0.7, the internals of orm.composite() have been greatly simplified and in-place mutation detection is no longer enabled by default; instead, the user-defined value must detect changes on its own and propagate them to all owning parents. The sqlalchemy.ext.mutable extension provides the helper class MutableComposite, which is a slight variant on the Mutable class.

As is the case with Mutable, the user-defined composite class subclasses MutableComposite as a mixin, and detects and delivers change events to its parents via the MutableComposite.changed() method. In the case of a composite class, the detection is usually via the usage of Python descriptors (i.e. @property), or alternatively via the special Python method __setattr__(). Below we expand upon the Point class introduced in Composite Column Types to subclass MutableComposite and to also route attribute set events via __setattr__ to the MutableComposite.changed() method:

from sqlalchemy.ext.mutable import MutableComposite

class Point(MutableComposite):
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __setattr__(self, key, value):
        "Intercept set events"

        # set the attribute
        object.__setattr__(self, key, value)

        # alert all parents to the change
        self.changed()

    def __composite_values__(self):
        return self.x, self.y

    def __eq__(self, other):
        return isinstance(other, Point) and \
            other.x == self.x and \
            other.y == self.y

    def __ne__(self, other):
        return not self.__eq__(other)

The MutableComposite class uses a Python metaclass to automatically establish listeners for any usage of orm.composite() that specifies our Point type. Below, when Point is mapped to the Vertex class, listeners are established which will route change events from Point objects to each of the Vertex.start and Vertex.end attributes:

from sqlalchemy.orm import composite, mapper
from sqlalchemy import Table, Column

vertices = Table('vertices', metadata,
    Column('id', Integer, primary_key=True),
    Column('x1', Integer),
    Column('y1', Integer),
    Column('x2', Integer),
    Column('y2', Integer),
)

class Vertex(object):
    pass

mapper(Vertex, vertices, properties={
    'start': composite(Point, vertices.c.x1, vertices.c.y1),
    'end': composite(Point, vertices.c.x2, vertices.c.y2)
})

Any in-place changes to the Vertex.start or Vertex.end members will flag the attribute as "dirty" on the parent object:

>>> from sqlalchemy.orm import Session
>>> sess = Session()
>>> v1 = Vertex(start=Point(3, 4), end=Point(12, 15))
>>> sess.add(v1)
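Continuing the session above, an in-place change on one of the composite's fields then marks the parent as dirty (the specific values here are illustrative):

>>> sess.commit()
>>> v1.end.x = 8        # routed through Point.__setattr__ to changed()
>>> assert v1 in sess.dirty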
Supporting Pickling
As is the case with Mutable, the MutableComposite helper class uses a weakref.WeakKeyDictionary available via the MutableBase._parents() attribute which isn't picklable. If we need to pickle instances of Point or its owning class Vertex, we at least need to define a __getstate__ that doesn't include the _parents dictionary. Below we define both a __getstate__ and a __setstate__ that package up the minimal form of our Point class:

class Point(MutableComposite):
    # ...

    def __getstate__(self):
        return self.x, self.y

    def __setstate__(self, state):
        self.x, self.y = state

As with Mutable, the MutableComposite augments the pickling process of the parent's object-relational state so that the MutableBase._parents() collection is restored to all Point objects.

API Reference

class sqlalchemy.ext.mutable.MutableBase
    Common base class to Mutable and MutableComposite.

    _parents
        Dictionary of parent object->attribute name on the parent.

        This attribute is a so-called "memoized" property. It initializes itself with a new weakref.WeakKeyDictionary the first time it is accessed, returning the same object upon subsequent access.

class sqlalchemy.ext.mutable.Mutable
    Bases: sqlalchemy.ext.mutable.MutableBase

    Mixin that defines transparent propagation of change events to a parent object.

    See the example in Establishing Mutability on Scalar Column Values for usage information.

    classmethod as_mutable(sqltype)
        Associate a SQL type with this mutable Python type.

        This establishes listeners that will detect ORM mappings against the given type, adding mutation event trackers to those mappings.

        The type is returned, unconditionally as an instance, so that as_mutable() can be used inline:

        Table('mytable', metadata,
            Column('id', Integer, primary_key=True),
            Column('data', MyMutableType.as_mutable(PickleType))
        )
        Note that the returned type is always an instance, even if a class is given, and that only columns which are declared specifically with that type instance receive additional instrumentation.

        To associate a particular mutable type with all occurrences of a particular type, use the Mutable.associate_with() classmethod of the particular Mutable() subclass to establish a global association.

        Warning: The listeners established by this method are global to all mappers, and are not garbage collected. Only use as_mutable() for types that are permanent to an application, not with ad-hoc types else this will cause unbounded growth in memory usage.

    classmethod associate_with(sqltype)
        Associate this wrapper with all future mapped columns of the given type.

        This is a convenience method that calls associate_with_attribute automatically.

        Warning: The listeners established by this method are global to all mappers, and are not garbage collected. Only use associate_with() for types that are permanent to an application, not with ad-hoc types else this will cause unbounded growth in memory usage.

    classmethod associate_with_attribute(attribute)
        Establish this type as a mutation listener for the given mapped descriptor.

    changed()
        Subclasses should call this method whenever change events occur.

class sqlalchemy.ext.mutable.MutableComposite
    Bases: sqlalchemy.ext.mutable.MutableBase

    Mixin that defines transparent propagation of change events on a SQLAlchemy "composite" object to its owning parent or parents.

    See the example in Establishing Mutability on Composites for usage information.

    Warning: The listeners established by the MutableComposite class are global to all mappers, and are not garbage collected. Only use MutableComposite for types that are permanent to an application, not with ad-hoc types else this will cause unbounded growth in memory usage.

    changed()
        Subclasses should call this method whenever change events occur.
2.10.4 Ordering List

A custom list that manages index/position information for its children.

slides_table = Table('Slides', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String))

bullets_table = Table('Bullets', metadata,
    Column('id', Integer, primary_key=True),
    Column('slide_id', Integer, ForeignKey('Slides.id')),
    Column('position', Integer),
    Column('text', String))

class Slide(object):
    pass

class Bullet(object):
    pass

mapper(Slide, slides_table, properties={
    'bullets': relationship(Bullet, order_by=[bullets_table.c.position])
})
mapper(Bullet, bullets_table)

The standard relationship mapping will produce a list-like attribute on each Slide containing all related Bullets, but coping with changes in ordering is totally your responsibility. If you insert a Bullet into that list, there is no magic: it won't have a position attribute unless you assign one, and you'll need to manually renumber all the subsequent Bullets in the list to accommodate the insert.

An orderinglist can automate this and manage the position attribute on all related bullets for you.

mapper(Slide, slides_table, properties={
    'bullets': relationship(Bullet,
                            collection_class=ordering_list('position'),
                            order_by=[bullets_table.c.position])
})
mapper(Bullet, bullets_table)

s = Slide()
s.bullets.append(Bullet())
s.bullets.append(Bullet())
s.bullets[1].position
>>> 1
s.bullets.insert(1, Bullet())
s.bullets[2].position
>>> 2

Use the ordering_list function to set up the collection_class on relationships (as in the mapper example above). This implementation depends on the list starting in the proper order, so be SURE to put an order_by on your relationship.

Warning: ordering_list only provides limited functionality when a primary key column or unique column is the target of the sort. Since changing the order of entries often means that two rows must trade values, this is not possible when the value is constrained by a primary key or unique constraint, since one of the rows would temporarily have to point to a third available value so that the other row could take its old value. ordering_list doesn't do any of this for you, nor does SQLAlchemy itself.

ordering_list takes the name of the related object's ordering attribute as an argument. By default, the zero-based integer index of the object's position in the ordering_list is synchronized with the ordering attribute: index 0 will get position 0, index 1 position 1, etc. To start numbering at 1 or some other integer, provide count_from=1.
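For example, a 1-based position column would be configured as follows (a sketch reusing the mapping above):

mapper(Slide, slides_table, properties={
    'bullets': relationship(Bullet,
                            collection_class=ordering_list('position',
                                                           count_from=1),
                            order_by=[bullets_table.c.position])
})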
Ordering values are not limited to incrementing integers. Almost any scheme can be implemented by supplying a custom ordering_func that maps a Python list index to any value you require.

API Reference

sqlalchemy.ext.orderinglist.ordering_list(attr, count_from=None, **kw)
    Prepares an OrderingList factory for use in mapper definitions.

    Returns an object suitable for use as an argument to a Mapper relationship's collection_class option. Arguments are:

    attr
        Name of the mapped attribute to use for storage and retrieval of ordering information

    count_from (optional)
        Set up an integer-based ordering, starting at count_from. For example, ordering_list('pos', count_from=1) would create a 1-based list in SQL, storing the value in the 'pos' column. Ignored if ordering_func is supplied.

    Passes along any keyword arguments to the OrderingList constructor.
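As a sketch of the custom ordering_func mentioned above, the function below stores letters rather than integers. It is purely illustrative: it assumes ordering_func receives the list index and the collection itself, and that the ordering column is String-typed rather than the Integer column used earlier:

import string

def alpha_ordering(index, collection):
    # return the value stored in the ordering column for this index
    return string.ascii_lowercase[index]

mapper(Slide, slides_table, properties={
    'bullets': relationship(Bullet,
                            collection_class=ordering_list(
                                'position', ordering_func=alpha_ordering),
                            order_by=[bullets_table.c.position])
})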
set_shard(shard_id)
    Return a new query, limited to a single shard ID.

    All subsequent operations with the returned query will be against the single shard regardless of other state.
At the class level, the usual descriptor behavior of returning the descriptor itself is modified by hybrid_property, to instead evaluate the function body given the Interval class as the argument:

>>> print Interval.length
interval."end" - interval.start

>>> print Session().query(Interval).filter(Interval.length > 10)
SELECT interval.id AS interval_id, interval.start AS interval_start,
interval."end" AS interval_end
FROM interval
WHERE interval."end" - interval.start > :param_1

ORM methods such as filter_by() generally use getattr() to locate attributes, so can also be used with hybrid attributes:

>>> print Session().query(Interval).filter_by(length=5)
SELECT interval.id AS interval_id, interval.start AS interval_start,
interval."end" AS interval_end
FROM interval
WHERE interval."end" - interval.start = :param_1

The contains() and intersects() methods are decorated with hybrid_method. This decorator applies the same idea to methods which accept zero or more arguments. The above methods return boolean values, and take advantage of the Python | and & bitwise operators to produce equivalent instance-level and SQL expression-level boolean behavior:

>>> i1.contains(6)
True
>>> i1.contains(15)
False
>>> i1.intersects(Interval(7, 18))
True
>>> i1.intersects(Interval(25, 29))
False

>>> print Session().query(Interval).filter(Interval.contains(15))
SELECT interval.id AS interval_id, interval.start AS interval_start,
interval."end" AS interval_end
FROM interval
WHERE interval.start <= :start_1 AND interval."end" > :end_1

>>> ia = aliased(Interval)
>>> print Session().query(Interval, ia).filter(Interval.intersects(ia))
SELECT interval.id AS interval_id, interval.start AS interval_start,
interval."end" AS interval_end, interval_1.id AS interval_1_id,
interval_1.start AS interval_1_start, interval_1."end" AS interval_1_end
FROM interval, interval AS interval_1
WHERE interval.start <= interval_1.start
    AND interval."end" > interval_1.start
    OR interval.start <= interval_1."end"
    AND interval."end" > interval_1."end"

Defining Expression Behavior Distinct from Attribute Behavior

Our usage of the & and | bitwise operators above was fortunate, considering our functions operated on two boolean values to return a new one. In many cases, the construction of an in-Python function and a SQLAlchemy SQL
expression have enough differences that two separate Python expressions should be defined. The hybrid decorators define the hybrid_property.expression() modifier for this purpose. As an example we'll define the radius of the interval, which requires the usage of the absolute value function:

from sqlalchemy import func

class Interval(object):
    # ...

    @hybrid_property
    def radius(self):
        return abs(self.length) / 2

    @radius.expression
    def radius(cls):
        return func.abs(cls.length) / 2

Above the Python function abs() is used for instance-level operations, the SQL function ABS() is used via the func object for class-level expressions:

>>> i1.radius
2

>>> print Session().query(Interval).filter(Interval.radius > 5)
SELECT interval.id AS interval_id, interval.start AS interval_start,
interval."end" AS interval_end
FROM interval
WHERE abs(interval."end" - interval.start) / :abs_1 > :param_1

Defining Setters

Hybrid properties can also define setter methods. If we wanted length above, when set, to modify the endpoint value:

class Interval(object):
    # ...

    @hybrid_property
    def length(self):
        return self.end - self.start

    @length.setter
    def length(self, value):
        self.end = self.start + value

The length(self, value) method is now called upon set:

>>> i1 = Interval(5, 10)
>>> i1.length
5
>>> i1.length = 12
>>> i1.end
17
Working with Relationships

There's no essential difference when creating hybrids that work with related objects as opposed to column-based data. The need for distinct expressions tends to be greater. Consider the following declarative mapping which relates a User to a SavingsAccount:

from sqlalchemy import Column, Integer, ForeignKey, Numeric, String
from sqlalchemy.orm import relationship
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.ext.hybrid import hybrid_property
Base = declarative_base()

class SavingsAccount(Base):
    __tablename__ = 'account'
    id = Column(Integer, primary_key=True)
    user_id = Column(Integer, ForeignKey('user.id'), nullable=False)
    balance = Column(Numeric(15, 5))

class User(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    name = Column(String(100), nullable=False)

    accounts = relationship("SavingsAccount", backref="owner")

    @hybrid_property
    def balance(self):
        if self.accounts:
            return self.accounts[0].balance
        else:
            return None

    @balance.setter
    def balance(self, value):
        if not self.accounts:
            account = SavingsAccount(owner=self)
        else:
            account = self.accounts[0]
        account.balance = value

    @balance.expression
    def balance(cls):
        return SavingsAccount.balance

The above hybrid property balance works with the first SavingsAccount entry in the list of accounts for this user. The in-Python getter/setter methods can treat accounts as a Python list available on self. However, at the expression level, we can't travel along relationships to column attributes directly since SQLAlchemy is explicit about joins. So here, it's expected that the User class will be used in an appropriate context such that an appropriate join to SavingsAccount will be present:
>>> print Session().query(User, User.balance).join(User.accounts).filter(User.balance > 5000)
SELECT "user".id AS user_id, "user".name AS user_name,
account.balance AS account_balance
FROM "user" JOIN account ON "user".id = account.user_id
WHERE account.balance > :balance_1
Note however, that while the instance level accessors need to worry about whether self.accounts is even present, this issue expresses itself differently at the SQL expression level, where we basically would use an outer join:

>>> from sqlalchemy import or_
>>> print (Session().query(User, User.balance).outerjoin(User.accounts).
...     filter(or_(User.balance < 5000, User.balance == None)))
SELECT "user".id AS user_id, "user".name AS user_name,
account.balance AS account_balance
FROM "user" LEFT OUTER JOIN account ON "user".id = account.user_id
WHERE account.balance < :balance_1 OR account.balance IS NULL

Building Custom Comparators

The hybrid property also includes a helper that allows construction of custom comparators. A comparator object allows one to customize the behavior of each SQLAlchemy expression operator individually. They are useful when creating custom types that have some highly idiosyncratic behavior on the SQL side. The example class below allows case-insensitive comparisons on the attribute named word_insensitive:

from sqlalchemy.ext.hybrid import Comparator, hybrid_property
from sqlalchemy import func, Column, Integer, String
from sqlalchemy.orm import Session
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()

class CaseInsensitiveComparator(Comparator):
    def __eq__(self, other):
        return func.lower(self.__clause_element__()) == func.lower(other)

class SearchWord(Base):
    __tablename__ = 'searchword'
    id = Column(Integer, primary_key=True)
    word = Column(String(255), nullable=False)

    @hybrid_property
    def word_insensitive(self):
        return self.word.lower()

    @word_insensitive.comparator
    def word_insensitive(cls):
        return CaseInsensitiveComparator(cls.word)

Above, SQL expressions against word_insensitive will apply the LOWER() SQL function to both sides:

>>> print Session().query(SearchWord).filter_by(word_insensitive="Trucks")
SELECT searchword.id AS searchword_id, searchword.word AS searchword_word
FROM searchword
WHERE lower(searchword.word) = lower(:lower_1)

The CaseInsensitiveComparator above implements part of the ColumnOperators interface. A coercion operation like lowercasing can be applied to all comparison operations (i.e. eq, lt, gt, etc.) using Operators.operate():

class CaseInsensitiveComparator(Comparator):
    def operate(self, op, other):
        return op(func.lower(self.__clause_element__()), func.lower(other))
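With the operate() version of the comparator in place, every comparison operator passes through the same lowercasing coercion, not just equality. A hypothetical doctest sketch - the exact bind parameter naming in the rendered SQL is an assumption:

>>> print Session().query(SearchWord).filter(SearchWord.word_insensitive > "trucks")
SELECT searchword.id AS searchword_id, searchword.word AS searchword_word
FROM searchword
WHERE lower(searchword.word) > lower(:lower_1)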
Hybrid Value Objects

Note in our previous example, if we were to compare the word_insensitive attribute of a SearchWord instance to a plain Python string, the plain Python string would not be coerced to lower case - the CaseInsensitiveComparator we built, being returned by @word_insensitive.comparator, only applies to the SQL side.

A more comprehensive form of the custom comparator is to construct a Hybrid Value Object. This technique applies the target value or expression to a value object which is then returned by the accessor in all cases. The value object allows control of all operations upon the value as well as how compared values are treated, both on the SQL expression side as well as the Python value side. Replacing the previous CaseInsensitiveComparator class with a new CaseInsensitiveWord class:

class CaseInsensitiveWord(Comparator):
    "Hybrid value representing a lower case representation of a word."

    def __init__(self, word):
        if isinstance(word, basestring):
            self.word = word.lower()
        elif isinstance(word, CaseInsensitiveWord):
            self.word = word.word
        else:
            self.word = func.lower(word)

    def operate(self, op, other):
        if not isinstance(other, CaseInsensitiveWord):
            other = CaseInsensitiveWord(other)
        return op(self.word, other.word)

    def __clause_element__(self):
        return self.word

    def __str__(self):
        return self.word

    key = 'word'
    "Label to apply to Query tuple results"

Above, the CaseInsensitiveWord object represents self.word, which may be a SQL function, or may be a Python native. By overriding operate() and __clause_element__() to work in terms of self.word, all comparison operations will work against the "converted" form of word, whether it be SQL side or Python side. Our SearchWord class can now deliver the CaseInsensitiveWord object unconditionally from a single hybrid call:

class SearchWord(Base):
    __tablename__ = 'searchword'
    id = Column(Integer, primary_key=True)
    word = Column(String(255), nullable=False)

    @hybrid_property
    def word_insensitive(self):
        return CaseInsensitiveWord(self.word)

The word_insensitive attribute now has case-insensitive comparison behavior universally, including SQL expression vs. Python expression (note the Python value is converted to lower case on the Python side here):

>>> print Session().query(SearchWord).filter_by(word_insensitive="Trucks")
SELECT searchword.id AS searchword_id, searchword.word AS searchword_word
FROM searchword
WHERE lower(searchword.word) = :lower_1

SQL expression versus SQL expression:
>>> sw1 = aliased(SearchWord)
>>> sw2 = aliased(SearchWord)
>>> print Session().query(sw1.word_insensitive, sw2.word_insensitive).filter(sw1.word_insensitive > sw2.word_insensitive)
SELECT lower(searchword_1.word) AS lower_1, lower(searchword_2.word) AS lower_2
FROM searchword AS searchword_1, searchword AS searchword_2
WHERE lower(searchword_1.word) > lower(searchword_2.word)

Python only expression:

>>> ws1 = SearchWord(word="SomeWord")
>>> ws1.word_insensitive == "sOmEwOrD"
True
>>> ws1.word_insensitive == "XOmEwOrX"
False
>>> print ws1.word_insensitive
someword

The Hybrid Value pattern is very useful for any kind of value that may have multiple representations, such as timestamps, time deltas, units of measurement, currencies and encrypted passwords.

API Reference

class sqlalchemy.ext.hybrid.hybrid_method(func, expr=None)
    A decorator which allows definition of a Python object method with both instance-level and class-level behavior.

    __init__(func, expr=None)
        Create a new hybrid_method.
        Usage is typically via decorator:

        from sqlalchemy.ext.hybrid import hybrid_method

        class SomeClass(object):
            @hybrid_method
            def value(self, x, y):
                return self._value + x + y

            @value.expression
            def value(self, x, y):
                return func.some_function(self._value, x, y)

    expression(expr)
        Provide a modifying decorator that defines a SQL-expression producing method.

class sqlalchemy.ext.hybrid.hybrid_property(fget, fset=None, fdel=None, expr=None)
    A decorator which allows definition of a Python descriptor with both instance-level and class-level behavior.

    __init__(fget, fset=None, fdel=None, expr=None)
        Create a new hybrid_property.
        Usage is typically via decorator:
        from sqlalchemy.ext.hybrid import hybrid_property

        class SomeClass(object):
            @hybrid_property
            def value(self):
                return self._value

            @value.setter
            def value(self, value):
                self._value = value

    comparator(comparator)
        Provide a modifying decorator that defines a custom comparator producing method.
        The return value of the decorated method should be an instance of Comparator.

    deleter(fdel)
        Provide a modifying decorator that defines a value-deletion method.

    expression(expr)
        Provide a modifying decorator that defines a SQL-expression producing method.

    setter(fset)
        Provide a modifying decorator that defines a value-setter method.

class sqlalchemy.ext.hybrid.Comparator(expression)
    Bases: sqlalchemy.orm.interfaces.PropComparator

    A helper class that allows easy construction of custom PropComparator classes for usage with hybrids.
2.10.7 SqlSoup
Note: SQLSoup will no longer be included with SQLAlchemy as of 0.8. Look for a third party project replicating its functionality soon.

Introduction

SqlSoup provides a convenient way to access existing database tables without having to declare table or mapper classes ahead of time. It is built on top of the SQLAlchemy ORM and provides a super-minimalistic interface to an existing database.

SqlSoup effectively provides a coarse grained, alternative interface to working with the SQLAlchemy ORM, providing a "self configuring" interface for extremely rudimental operations. It's somewhat akin to a "super novice mode" version of the ORM. While SqlSoup can be very handy, users are strongly encouraged to use the full ORM for non-trivial applications.

Suppose we have a database with users, books, and loans tables (corresponding to the PyWebOff dataset, if you're curious). Creating a SqlSoup gateway is just like creating an SQLAlchemy engine:

>>> from sqlalchemy.ext.sqlsoup import SqlSoup
>>> db = SqlSoup('sqlite:///:memory:')

or, you can re-use an existing engine:

>>> db = SqlSoup(engine)

You can optionally specify a schema within the database for your SqlSoup:
>>> db.schema = 'myschemaname'

Loading objects

Loading objects is as easy as this:

>>> users = db.users.all()
>>> users.sort()
>>> users
[MappedUsers(name=u'Joe Student',email=u'[email protected]',password=u'student',classname=None,admin=0),
 MappedUsers(name=u'Bhargan Basepair',email=u'[email protected]',password=u'basepair',classname=None,admin=1)]

Of course, letting the database do the sort is better:

>>> db.users.order_by(db.users.name).all()
[MappedUsers(name=u'Bhargan Basepair',email=u'[email protected]',password=u'basepair',classname=None,admin=1),
 MappedUsers(name=u'Joe Student',email=u'[email protected]',password=u'student',classname=None,admin=0)]

Field access is intuitive:

>>> users[0].email
u'[email protected]'

Of course, you don't want to load all users very often. Let's add a WHERE clause. Let's also switch the order_by to DESC while we're at it:

>>> from sqlalchemy import or_, and_, desc
>>> where = or_(db.users.name=='Bhargan Basepair', db.users.email=='[email protected]')
>>> db.users.filter(where).order_by(desc(db.users.name)).all()
[MappedUsers(name=u'Joe Student',email=u'[email protected]',password=u'student',classname=None,admin=0),
 MappedUsers(name=u'Bhargan Basepair',email=u'[email protected]',password=u'basepair',classname=None,admin=1)]

You can also use .first() (to retrieve only the first object from a query) or .one() (like .first when you expect exactly one user - it will raise an exception if more were returned):

>>> db.users.filter(db.users.name=='Bhargan Basepair').one()
MappedUsers(name=u'Bhargan Basepair',email=u'[email protected]',password=u'basepair',classname=None,admin=1)

Since name is the primary key, this is equivalent to

>>> db.users.get('Bhargan Basepair')
MappedUsers(name=u'Bhargan Basepair',email=u'[email protected]',password=u'basepair',classname=None,admin=1)

This is also equivalent to
>>> db.users.filter_by(name='Bhargan Basepair').one()
MappedUsers(name=u'Bhargan Basepair',email=u'[email protected]',password=u'basepair',classname=None,admin=1)

filter_by is like filter, but takes kwargs instead of full clause expressions. This makes it more concise for simple queries like this, but you can't do complex queries like the or_ above or non-equality based comparisons this way.
Joins

>>> join1 = db.join(db.users, db.loans, isouter=True)
>>> join1.filter_by(name='Joe Student').all()
[MappedJoin(name=u'Joe Student',email=u'[email protected]',password=u'student',classname=None,admin=0,book_id=1,user_name=u'Joe Student',loan_date=datetime.datetime(2006, 7, 12, 0, 0))]

If you're unfortunate enough to be using MySQL with the default MyISAM storage engine, you'll have to specify the join condition manually, since MyISAM does not store foreign keys. Here's the same join again, with the join condition explicitly specified:

>>> db.join(db.users, db.loans, db.users.name==db.loans.user_name, isouter=True)
<class 'sqlalchemy.ext.sqlsoup.MappedJoin'>

You can compose arbitrarily complex joins by combining Join objects with tables or other joins. Here we combine our first join with the books table:

>>> join2 = db.join(join1, db.books)
>>> join2.all()
[MappedJoin(name=u'Joe Student',email=u'[email protected]',password=u'student',classname=None,admin=0,book_id=1,user_name=u'Joe Student',loan_date=datetime.datetime(2006, 7, 12, 0, 0),id=1,title=u'Mustards I Have Known',published_year=u'1989',authors=u'Jones')]

If you join tables that have an identical column name, wrap your join with with_labels, to disambiguate columns with their table name (.c is short for .columns):

>>> db.with_labels(join1).c.keys()
[u'users_name', u'users_email', u'users_password', u'users_classname', u'users_admin', u'loans_book_id', u'loans_user_name', u'loans_loan_date']

You can also join directly to a labeled object:

>>> labeled_loans = db.with_labels(db.loans)
>>> db.join(db.users, labeled_loans, isouter=True).c.keys()
[u'name', u'email', u'password', u'classname', u'admin', u'loans_book_id', u'loans_user_name', u'loans_loan_date']

Relationships

You can define relationships on SqlSoup classes:

>>> db.users.relate('loans', db.loans)

These can then be used like a normal SA property:

>>> db.users.get('Joe Student').loans
[MappedLoans(book_id=1,user_name=u'Joe Student',loan_date=datetime.datetime(2006, 7, 12, 0, 0))]

>>> db.users.filter(~db.users.loans.any()).all()
[MappedUsers(name=u'Bhargan Basepair',email=u'[email protected]',password=u'basepair',classname=None,admin=1)]
relate can take any options that the relationship function accepts in normal mapper definition:
>>> del db._cache['users']
>>> db.users.relate('loans', db.loans, order_by=db.loans.loan_date, cascade='all, delete-orphan')

Advanced Use
You can supply a custom Session or ScopedSession to SqlSoup at construction time; for example, to turn off autoflush and expire-on-commit:

>>> from sqlalchemy.orm import scoped_session, sessionmaker
>>> db = SqlSoup('sqlite://', session=scoped_session(sessionmaker(autoflush=False, expire_on_commit=False)))
SqlSoup can also map arbitrary selectables, such as a Select construct; here we map a grouped count of books by published year:

>>> from sqlalchemy import select, func
>>> b = db.books._table
>>> s = select([b.c.published_year, func.count('*').label('n')], from_obj=[b], group_by=[b.c.published_year])
>>> s = s.alias('years_with_count')
>>> years_with_count = db.map(s, primary_key=[s.c.published_year])
>>> years_with_count.filter_by(published_year='1989').all()
[MappedBooks(published_year=u'1989',n=1)]
Obviously if we just wanted to get a list of counts associated with book years once, raw SQL is going to be less work. The advantage of mapping a Select is reusability, both standalone and in Joins. (And if you go to full SQLAlchemy, you can perform mappings like this directly to your object models.)

An easy way to save mapped selectables like this is to just hang them on your db object:

>>> db.years_with_count = years_with_count

Python is flexible like that!
Raw SQL
SqlSoup works fine with SQLAlchemy's text construct, described in Using Text. You can also execute textual SQL directly using the execute() method, which corresponds to the execute() method on the underlying Session. Expressions here are expressed like text() constructs, using named parameters with colons:
>>> rp = db.execute('select name, email from users where name like :name order by name', name='%Bhargan%')
>>> for name, email in rp.fetchall():
...     print name, email
Bhargan Basepair [email protected]

Or you can get at the current transaction's connection using connection(). This is the raw connection object which can accept any sort of SQL expression or raw SQL string passed to the database:
>>> conn = db.connection()
>>> conn.execute("select name, email from users where name like ? order by name", '%Bhargan%')
class sqlalchemy.ext.sqlsoup.SqlSoup(engine_or_metadata, base=<type 'object'>, session=None)

__init__(engine_or_metadata, base=<type 'object'>, session=None)
    Initialize a new SqlSoup.

    Parameters
        engine_or_metadata - a string database URL, Engine or MetaData object to associate with. If the argument is a MetaData, it should be bound to an Engine.
        base - a class which will serve as the default class for returned mapped classes. Defaults to object.
        session - a ScopedSession or Session with which to associate ORM operations for this SqlSoup instance. If None, a ScopedSession that's local to this module is used.

bind
    The Engine associated with this SqlSoup.
clear()
    Synonym for SqlSoup.expunge_all().

commit()
    Commit the current transaction. See Session.commit().

connection()
    Return the current Connection in use by the current transaction.

delete(instance)
    Mark an instance as deleted.

engine
    The Engine associated with this SqlSoup.

entity(attr, schema=None)
    Return the named entity from this SqlSoup, or create if not present.
    For more generalized mapping, see map_to().

execute(stmt, **params)
    Execute a SQL statement.
    The statement may be a string SQL string, an expression.select() construct, or an expression.text() construct.

expunge(instance)
    Remove an instance from the Session. See Session.expunge().

expunge_all()
    Clear all objects from the current Session. See Session.expunge_all().

flush()
    Flush pending changes to the database. See Session.flush().

join(left, right, onclause=None, isouter=False, base=None, **mapper_args)
    Create an expression.join() and map to it.
    The class and its mapping are not cached and will be discarded once dereferenced (as of 0.6.6).

    Parameters
        left - a mapped class or table object.
        right - a mapped class or table object.
        onclause - optional ON clause construct.
        isouter - if True, the join will be an OUTER join.
        base - a Python class which will be used as the base for the mapped class. If None, the base argument specified by this SqlSoup instance's constructor will be used, which defaults to object.
        mapper_args - Dictionary of arguments which will be passed directly to orm.mapper().
map(selectable, base=None, **mapper_args)
    Map a selectable directly.
    The class and its mapping are not cached and will be discarded once dereferenced (as of 0.6.6).

    Parameters
        selectable - an expression.select() construct.
        base - a Python class which will be used as the base for the mapped class. If None, the base argument specified by this SqlSoup instance's constructor will be used, which defaults to object.
        mapper_args - Dictionary of arguments which will be passed directly to orm.mapper().

map_to(attrname, tablename=None, selectable=None, schema=None, base=None, mapper_args=immutabledict({}))
    Configure a mapping to the given attrname.
    This is the "master" method that can be used to create any configuration. (new in 0.6.6)

    Parameters
        attrname - String attribute name which will be established as an attribute on this SqlSoup instance.
        base - a Python class which will be used as the base for the mapped class. If None, the base argument specified by this SqlSoup instance's constructor will be used, which defaults to object.
        mapper_args - Dictionary of arguments which will be passed directly to orm.mapper().
        tablename - String name of a Table to be reflected. If a Table is already available, use the selectable argument. This argument is mutually exclusive versus the selectable argument.
        selectable - a Table, Join, or Select object which will be mapped. This argument is mutually exclusive versus the tablename argument.
        schema - String schema name to use if the tablename argument is present.

rollback()
    Rollback the current transaction. See Session.rollback().

with_labels(selectable, base=None, **mapper_args)
    Map a selectable directly, wrapping the selectable in a subquery with labels.
    The class and its mapping are not cached and will be discarded once dereferenced (as of 0.6.6).

    Parameters
        selectable - an expression.select() construct.
        base - a Python class which will be used as the base for the mapped class. If None, the base argument specified by this SqlSoup instance's constructor will be used, which defaults to object.
        mapper_args - Dictionary of arguments which will be passed directly to orm.mapper().
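As a hypothetical illustration of map_to() - the attribute name 'people' is an assumption for this sketch, not from the original text - an existing table can be exposed under a different attribute name:

>>> # expose the existing 'users' table as db.people
>>> people = db.map_to('people', tablename='users')
>>> db.people.all()  # same rows as db.users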
2.11 Examples
The SQLAlchemy distribution includes a variety of code examples illustrating a select set of patterns, some typical and some not so typical. All are runnable and can be found in the /examples directory of the distribution. Each example contains a README in its __init__.py file, each of which are listed below.

Additional SQLAlchemy examples, some user contributed, are available on the wiki at http://www.sqlalchemy.org/trac/wiki/UsageRecipes.
2.11.2 Associations
Location: /examples/association/

Examples illustrating the usage of the "association object" pattern, where an intermediary class mediates the relationship between two classes that are associated in a many-to-many pattern. This directory includes the following examples:

basic_association.py - illustrate a many-to-many relationship between an "Order" and a collection of "Item" objects, associating a purchase price with each via an association object called "OrderItem"

proxied_association.py - same example as basic_association, adding in usage of sqlalchemy.ext.associationproxy to make explicit references to "OrderItem" optional.
dict_of_sets_with_default.py - an advanced association proxy example which illustrates nesting of association proxies to produce multi-level Python collections, in this case a dictionary with string keys and sets of integers as values, which conceal the underlying mapped classes.
local_session_caching.py - Grok everything so far? This example creates a new Beaker container that will persist data in a dictionary which is local to the current session. remove() the session and the cache is gone.
a function which can return a list of shard ids to try, given a particular Query ("query_chooser"). If it returns all shard ids, all shards will be queried and the results joined together.

In this example, four sqlite databases will store information about weather data on a database-per-continent basis. We provide example shard_chooser, id_chooser and query_chooser functions. The query_chooser illustrates inspection of the SQL expression element in order to attempt to determine a single shard being requested.

The construction of generic sharding routines is an ambitious approach to the issue of organizing instances among multiple databases. For a more plain-spoken alternative, the "distinct entity" approach is a simple method of assigning objects to different tables (and potentially database nodes) in an explicit way - described on the wiki at EntityName.
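For orientation, a minimal sketch of what such chooser functions can look like - the continent-based lookup table and attribute names here are illustrative assumptions, not the distribution's exact code:

# hypothetical mapping of a 'continent' attribute to a shard id
shard_lookup = {
    'North America': 'north_america',
    'Asia': 'asia',
    'Europe': 'europe',
    'South America': 'south_america',
}

def shard_chooser(mapper, instance, clause=None):
    # pick the shard that a flush should write this instance to
    return shard_lookup[instance.continent]

def id_chooser(query, ident):
    # given a primary key identity, return the shard ids to search;
    # with no continent information available, try them all
    return list(shard_lookup.values())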
a standalone operator example.

The implementation is limited to only public, well known and simple to use extension points. E.g.:

print session.query(Road).filter(Road.road_geom.intersects(r1.road_geom)).all()
Base = declarative_base(bind=engine)

class SomeClass(Base):
    __tablename__ = 'sometable'

    # ...

class SomeVersionedClass(Base):
    __metaclass__ = VersionedMeta
    __tablename__ = 'someothertable'

    # ...

The VersionedMeta is a declarative metaclass - to use the extension with plain mappers, the _history_mapper function can be applied:

from history_meta import _history_mapper

m = mapper(SomeClass, sometable)
_history_mapper(m)

SomeHistoryClass = SomeClass.__history_mapper__.class_
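A rough usage sketch, assuming the example's history_meta module is configured and that SomeVersionedClass has a hypothetical 'name' column (the column is an assumption for illustration):

sess = Session()
obj = SomeVersionedClass(name='version one')  # hypothetical column
sess.add(obj)
sess.commit()

obj.name = 'version two'
sess.commit()
# the associated history table should now hold a row recording
# the prior 'version one' state of the object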
directly, so are compatible with the native cElementTree as well as lxml, and can be adapted to suit any kind of DOM representation system. Querying along xpath-like strings is illustrated as well.

In order of complexity:

pickle.py - Quick and dirty, serialize the whole DOM into a BLOB column. While the example is very brief, it has very limited functionality.

adjacency_list.py - Each DOM node is stored in an individual table row, with attributes represented in a separate table. The nodes are associated in a hierarchy using an adjacency list structure. A query function is introduced which can search for nodes along any path with a given structure of attributes, basically a (very narrow) subset of xpath.

optimized_al.py - Uses the same strategy as adjacency_list.py, but associates each DOM row with its owning document row, so that a full document of DOM nodes can be loaded using O(1) queries - the construction of the hierarchy is performed after the load in a non-recursive fashion and is much more efficient.

E.g.:

# parse an XML file and persist in the database
doc = ElementTree.parse("test.xml")
session.add(Document(file, doc))
session.commit()

# locate documents with a certain path/attribute structure
for document in find_document('/somefile/header/field2[@attr=foo]'):
    # dump the XML
    print document
A single mapper can maintain a chain of MapperExtension objects. When a particular mapping event occurs, the corresponding method on each MapperExtension is invoked serially, and each method has the ability to halt the chain from proceeding further:

m = mapper(User, users_table, extension=[ext1, ext2, ext3])

Each MapperExtension method returns the symbol EXT_CONTINUE by default. This symbol generally means "move to the next MapperExtension for processing". For methods that return objects like translated rows or new object instances, EXT_CONTINUE means the result of the method should be ignored. In some cases it's required for a default mapper activity to be performed, such as adding a new instance to a result list.

The symbol EXT_STOP has significance within a chain of MapperExtension objects that the chain will be stopped when this symbol is returned. Like EXT_CONTINUE, it also has additional significance in some cases that a default mapper activity will not be performed.

after_delete(mapper, connection, instance)
    Receive an object instance after that instance is deleted.
    The return value is only significant within the MapperExtension chain; the parent mapper's behavior isn't modified by this method.

after_insert(mapper, connection, instance)
    Receive an object instance after that instance is inserted.
    The return value is only significant within the MapperExtension chain; the parent mapper's behavior isn't modified by this method.

after_update(mapper, connection, instance)
    Receive an object instance after that instance is updated.
    The return value is only significant within the MapperExtension chain; the parent mapper's behavior isn't modified by this method.

append_result(mapper, selectcontext, row, instance, result, **flags)
    Receive an object instance before that instance is appended to a result list.
    If this method returns EXT_CONTINUE, result appending will proceed normally. If this method returns any other value or None, result appending will not proceed for this instance, giving this extension an opportunity to do the appending itself, if desired.

    mapper - The mapper doing the operation.
    selectcontext - The QueryContext generated from the Query.
    row - The result row from the database.
    instance - The object instance to be appended to the result.
    result - List to which results are being appended.
    **flags - extra information about the row, same as criterion in create_row_processor() method of MapperProperty.

before_delete(mapper, connection, instance)
    Receive an object instance before that instance is deleted.
    Note that no changes to the overall flush plan can be made here; and manipulation of the Session will not have the desired effect. To manipulate the Session within an extension, use SessionExtension.
    The return value is only significant within the MapperExtension chain; the parent mapper's behavior isn't modified by this method.
before_insert(mapper, connection, instance)
    Receive an object instance before that instance is inserted into its table.
    This is a good place to set up primary key values and such that aren't handled otherwise.
    Column-based attributes can be modified within this method which will result in the new value being inserted. However no changes to the overall flush plan can be made, and manipulation of the Session will not have the desired effect. To manipulate the Session within an extension, use SessionExtension.
    The return value is only significant within the MapperExtension chain; the parent mapper's behavior isn't modified by this method.

before_update(mapper, connection, instance)
    Receive an object instance before that instance is updated.
    Note that this method is called for all instances that are marked as "dirty", even those which have no net changes to their column-based attributes. An object is marked as dirty when any of its column-based attributes have a "set attribute" operation called or when any of its collections are modified. If, at update time, no column-based attributes have any net changes, no UPDATE statement will be issued. This means that an instance being sent to before_update is not a guarantee that an UPDATE statement will be issued (although you can affect the outcome here).
    To detect if the column-based attributes on the object have net changes, and will therefore generate an UPDATE statement, use object_session(instance).is_modified(instance, include_collections=False).
    Column-based attributes can be modified within this method which will result in the new value being updated. However no changes to the overall flush plan can be made, and manipulation of the Session will not have the desired effect. To manipulate the Session within an extension, use SessionExtension.
    The return value is only significant within the MapperExtension chain; the parent mapper's behavior isn't modified by this method.

create_instance(mapper, selectcontext, row, class_)
    Receive a row when a new object instance is about to be created from that row.
    The method can choose to create the instance itself, or it can return EXT_CONTINUE to indicate normal object creation should take place.

    mapper - The mapper doing the operation.
    selectcontext - The QueryContext generated from the Query.
    row - The result row from the database.
    class_ - The class we are mapping.
    return value - A new object instance, or EXT_CONTINUE.

init_failed(mapper, class_, oldinit, instance, args, kwargs)
    Receive an instance when its constructor has been called, and raised an exception.
    This method is only called during a userland construction of an object. It is not called when an object is loaded from the database.
    The return value is only significant within the MapperExtension chain; the parent mapper's behavior isn't modified by this method.

init_instance(mapper, class_, oldinit, instance, args, kwargs)
    Receive an instance when its constructor is called.
    This method is only called during a userland construction of an object. It is not called when an object is loaded from the database.
    The return value is only significant within the MapperExtension chain; the parent mapper's behavior isn't modified by this method.

instrument_class(mapper, class_)
    Receive a class when the mapper is first constructed, and has applied instrumentation to the mapped class.
    The return value is only significant within the MapperExtension chain; the parent mapper's behavior isn't modified by this method.

populate_instance(mapper, selectcontext, row, instance, **flags)
    Receive an instance before that instance has its attributes populated.
    This usually corresponds to a newly loaded instance but may also correspond to an already-loaded instance which has unloaded attributes to be populated. The method may be called many times for a single instance, as multiple result rows are used to populate eagerly loaded collections.
    If this method returns EXT_CONTINUE, instance population will proceed normally. If any other value or None is returned, instance population will not proceed, giving this extension an opportunity to populate the instance itself, if desired.
    As of 0.5, most usages of this hook are obsolete. For a generic "object has been newly created from a row" hook, use reconstruct_instance(), or the @orm.reconstructor decorator.

reconstruct_instance(mapper, instance)
    Receive an object instance after it has been created via __new__, and after initial attribute population has occurred.
    This typically occurs when the instance is created based on incoming result rows, and is only called once for that instance's lifetime.
    Note that during a result-row load, this method is called upon the first row received for this instance. Note that some attributes and collections may or may not be loaded or even initialized, depending on what's present in the result rows.
    The return value is only significant within the MapperExtension chain; the parent mapper's behavior isn't modified by this method.

translate_row(mapper, context, row)
    Perform pre-processing on the given result row and return a new row instance.
    This is called when the mapper first receives a row, before the object identity or the instance itself has been derived from that row. The given row may or may not be a RowProxy object - it will always be a dictionary-like object which contains mapped columns as keys. The returned object should also be a dictionary-like object which recognizes mapped columns as keys.
    If the ultimate return value is EXT_CONTINUE, the row is not translated.
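To make the chain mechanics concrete, here is a minimal sketch of a MapperExtension subclass - the 'created_at' attribute and the User/users_table names are assumptions for illustration:

import datetime

from sqlalchemy.orm import mapper
from sqlalchemy.orm.interfaces import MapperExtension, EXT_CONTINUE

class TimestampExtension(MapperExtension):
    """Stamp a hypothetical 'created_at' attribute before INSERT."""

    def before_insert(self, mapper, connection, instance):
        instance.created_at = datetime.datetime.now()
        # EXT_CONTINUE lets any further extensions in the
        # chain receive the event as well
        return EXT_CONTINUE

mapper(User, users_table, extension=TimestampExtension())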
Subclasses may be installed into a Session (or sessionmaker()) using the extension keyword argument:

from sqlalchemy.orm.interfaces import SessionExtension

class MySessionExtension(SessionExtension):
    def before_commit(self, session):
        print "before commit!"

Session = sessionmaker(extension=MySessionExtension())

The same SessionExtension instance can be used with any number of sessions.

after_attach(session, instance)
    Execute after an instance is attached to a session.
    This is called after an add, delete or merge.

after_begin(session, transaction, connection)
    Execute after a transaction is begun on a connection.
    transaction is the SessionTransaction. This method is called after an engine level transaction is begun on a connection.

after_bulk_delete(session, query, query_context, result)
    Execute after a bulk delete operation to the session.
    This is called after a session.query(...).delete().
    query is the query object that this delete operation was called on. query_context was the query context object. result is the result object returned from the bulk operation.

after_bulk_update(session, query, query_context, result)
    Execute after a bulk update operation to the session.
    This is called after a session.query(...).update().
    query is the query object that this update operation was called on. query_context was the query context object. result is the result object returned from the bulk operation.

after_commit(session)
    Execute after a commit has occurred.
    Note that this may not be per-flush if a longer running transaction is ongoing.

after_flush(session, flush_context)
    Execute after flush has completed, but before commit has been called.
    Note that the session's state is still in pre-flush, i.e. 'new', 'dirty', and 'deleted' lists still show pre-flush state as well as the history settings on instance attributes.

after_flush_postexec(session, flush_context)
    Execute after flush has completed, and after the post-exec state occurs.
    This will be when the 'new', 'dirty', and 'deleted' lists are in their final state. An actual commit() may or may not have occurred, depending on whether or not the flush started its own transaction or participated in a larger transaction.

after_rollback(session)
    Execute after a rollback has occurred.
    Note that this may not be per-flush if a longer running transaction is ongoing.

before_commit(session)
    Execute right before commit is called.
    Note that this may not be per-flush if a longer running transaction is ongoing.

before_flush(session, flush_context, instances)
    Execute before flush process has started.
instances is an optional list of objects which were passed to the flush() method.
AttributeExtension is used to listen for set, remove, and append events on individual mapped attributes. It is established on an individual mapped attribute using the extension argument, available on column_property(), relationship(), and others:

from sqlalchemy.orm.interfaces import AttributeExtension
from sqlalchemy.orm import mapper, relationship, column_property

class MyAttrExt(AttributeExtension):
    def append(self, state, value, initiator):
        print "append event !"
        return value

    def set(self, state, value, oldvalue, initiator):
        print "set event !"
        return value

mapper(SomeClass, sometable, properties={
    'foo': column_property(sometable.c.foo, extension=MyAttrExt()),
    'bar': relationship(Bar, extension=MyAttrExt())
})

Note that the AttributeExtension methods append() and set() need to return the value parameter. The returned value is used as the effective value, and allows the extension to change what is ultimately persisted.

AttributeExtension is assembled within the descriptors associated with a mapped class.

active_history
    indicates that the set() method would like to receive the 'old' value, even if it means firing lazy callables.
    Note that active_history can also be set directly via column_property() and relationship().

append(state, value, initiator)
    Receive a collection append event.
    The returned value will be used as the actual value to be appended.

remove(state, value, initiator)
    Receive a remove event.
    No return value is defined.

set(state, value, oldvalue, initiator)
    Receive a set event.
    The returned value will be used as the actual value to be set.
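Assuming the mapping above, attribute operations then fire the extension methods; a hypothetical interaction might look like:

>>> obj = SomeClass()
>>> obj.foo = 'some value'   # fires MyAttrExt.set()
set event !
>>> obj.bar.append(Bar())    # fires MyAttrExt.append()
append event !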
A mapping operation was requested for an unknown class.

exception sqlalchemy.orm.exc.UnmappedColumnError
    Bases: sqlalchemy.exc.InvalidRequestError
    Mapping operation was requested on an unknown column.

exception sqlalchemy.orm.exc.UnmappedError
    Bases: sqlalchemy.exc.InvalidRequestError
    Base for exceptions that involve expected mappings not present.

exception sqlalchemy.orm.exc.UnmappedInstanceError(obj, msg=None)
    Bases: sqlalchemy.orm.exc.UnmappedError
    A mapping operation was requested for an unknown instance.
*columns - The list of columns describes a single object property. If there are multiple tables joined together for the mapper, this list represents the equivalent column as it appears across each table.
group
deferred
comparator_factory
descriptor
expire_on_flush
extension

class sqlalchemy.orm.descriptor_props.CompositeProperty(class_, *attrs, **kwargs)
    Bases: sqlalchemy.orm.descriptor_props.DescriptorProperty

    do_init()
        Initialization which occurs after the CompositeProperty has been associated with its parent mapper.

    get_history(state, dict_, passive=<symbol PASSIVE_OFF>)
        Provided for userland code that uses attributes.get_history().

class sqlalchemy.orm.state.InstanceState(obj, manager)
    Bases: object

    Tracks state information at the instance level.

    commit(dict_, keys)
        Commit attributes.
        This is used by a partial-attribute load operation to mark committed those attributes which were refreshed from the database.
        Attributes marked as "expired" can potentially remain "expired" after this step if a value was not populated in state.dict.

    commit_all(dict_, instance_dict=None)
        Commit all attributes unconditionally.
        This is used after a flush() or a full load/refresh to remove all pending state from the instance.
        - all attributes are marked as "committed"
        - the "strong dirty reference" is removed
        - the "modified" flag is set to False
        - any "expired" markers/callables for attributes loaded are removed.
        Attributes marked as "expired" can potentially remain "expired" after this step if a value was not populated in state.dict.

    expire_attribute_pre_commit(dict_, key)
        A fast expire that can be called by column loaders during a load.
        The additional bookkeeping is finished up in commit_all().
        This method is actually called a lot with joined-table loading, when the second table isn't present in the result.

    expired_attributes
        Return the set of keys which are "expired" to be loaded by the manager's deferred scalar loader, assuming no pending changes.
        See also the "unmodified" collection, which is intersected against this set when a refresh operation occurs.

    initialize(key)
        Set this attribute to an empty value or collection, based on the AttributeImpl in use.

    reset(dict_, key)
        Remove the given attribute and any callables associated with it.

    set_callable(dict_, key, callable_)
        Remove the given attribute and set the given callable as a loader.

    unloaded
        Return the set of keys which do not have a loaded value.
        This includes expired attributes and any other attribute that was never populated or modified.

    unmodified
        Return the set of keys which have no uncommitted changes.

    unmodified_intersection(keys)
        Return self.unmodified.intersection(keys).

    value_as_iterable(dict_, key, passive=<symbol PASSIVE_OFF>)
        Return a list of tuples (state, obj) for the given key.
        Returns an empty list if the value is None/empty/PASSIVE_NO_RESULT.

class sqlalchemy.orm.interfaces.MapperProperty
    Bases: object

    Manage the relationship of a Mapper to a single class attribute, as well as that attribute as it appears on individual instances of the class, including attribute instrumentation, attribute access, loading behavior, and dependency calculations.

    The most common occurrences of MapperProperty are the mapped Column, which is represented in a mapping as an instance of ColumnProperty, and a reference to another class produced by relationship(), represented in the mapping as an instance of RelationshipProperty.

    cascade
        The set of 'cascade' attribute names.
        This collection is checked before the 'cascade_iterator' method is called.

    cascade_iterator(type_, state, visited_instances=None, halt_on=None)
        Iterate through instances related to the given instance for a particular 'cascade', starting with this MapperProperty.
        Return an iterator of 3-tuples (instance, mapper, state).
        Note that the 'cascade' collection on this MapperProperty is checked first for the given type before cascade_iterator is called.
        See PropertyLoader for the related instance implementation.

    class_attribute
        Return the class-bound descriptor corresponding to this MapperProperty.

    compare(operator, value, **kw)
        Return a compare operation for the columns represented by this MapperProperty to the given value, which may be a column value or an instance.
        'operator' is an operator from the operators module, or from sql.Comparator.
        By default uses the PropComparator attached to this MapperProperty under the attribute name "comparator".
    create_row_processor(context, path, reduced_path, mapper, row, adapter)
        Return a 3-tuple consisting of three row processing functions.

    do_init()
        Perform subclass-specific initialization post-mapper-creation steps.
        This is a template method called by the MapperProperty object's init() method.

    init()
        Called after all mappers are created to assemble relationships between mappers and perform other post-mapper-creation initialization steps.

    is_primary()
        Return True if this MapperProperty's mapper is the "primary" mapper for its class.
        This flag is used to indicate that the MapperProperty can define attribute instrumentation for the class at the class level (as opposed to the individual instance level).

    merge(session, source_state, source_dict, dest_state, dest_dict, load, _recursive)
        Merge the attribute represented by this MapperProperty from source to destination object.

    post_instrument_class(mapper)
        Perform instrumentation adjustments that need to occur after init() has completed.

    setup(context, entity, path, reduced_path, adapter, **kwargs)
        Called by Query for the purposes of constructing a SQL statement.
        Each MapperProperty associated with the target mapper processes the statement referenced by the query context, adding columns and/or criterion as appropriate.

class sqlalchemy.orm.interfaces.PropComparator(prop, mapper, adapter=None)
    Bases: sqlalchemy.sql.operators.ColumnOperators

    Defines comparison operations for MapperProperty objects.

    User-defined subclasses of PropComparator may be created. The built-in Python comparison and math operator methods, such as __eq__(), __lt__(), __add__(), can be overridden to provide new operator behavior. The custom PropComparator is passed to the mapper property via the comparator_factory argument. In each case, the appropriate subclass of PropComparator should be used:

    from sqlalchemy.orm.properties import \
        ColumnProperty,\
        CompositeProperty,\
        RelationshipProperty

    class MyColumnComparator(ColumnProperty.Comparator):
        pass

    class MyCompositeComparator(CompositeProperty.Comparator):
        pass

    class MyRelationshipComparator(RelationshipProperty.Comparator):
        pass

    adapted(adapter)
        Return a copy of this PropComparator which will use the given adaption function on the local side of generated expressions.

    any(criterion=None, **kwargs)
        Return true if this collection contains any member that meets the given criterion.
        The usual implementation of any() is RelationshipProperty.Comparator.any().

        Parameters
            criterion - an optional ClauseElement formulated against the member class' table or attributes.
            **kwargs - key/value pairs corresponding to member class attribute names which will be compared via equality to the corresponding values.

    has(criterion=None, **kwargs)
        Return true if this element references a member which meets the given criterion.
        The usual implementation of has() is RelationshipProperty.Comparator.has().

        Parameters
            criterion - an optional ClauseElement formulated against the member class' table or attributes.
            **kwargs - key/value pairs corresponding to member class attribute names which will be compared via equality to the corresponding values.

    of_type(class_)
        Redefine this object in terms of a polymorphic subclass.
        Returns a new PropComparator from which further criterion can be evaluated.
        e.g.:

        query.join(Company.employees.of_type(Engineer)).\
            filter(Engineer.name=='foo')

        Parameters
            class_ - a class or mapper indicating that criterion will be against this specific subclass.

class sqlalchemy.orm.properties.RelationshipProperty(argument, secondary=None, primaryjoin=None, secondaryjoin=None, foreign_keys=None, uselist=None, order_by=False, backref=None, back_populates=None, post_update=False, cascade=False, extension=None, viewonly=False, lazy=True, collection_class=None, passive_deletes=False, passive_updates=True, remote_side=None, enable_typechecks=True, join_depth=None, comparator_factory=None, single_parent=False, innerjoin=False, doc=None, active_history=False, cascade_backrefs=True, load_on_pending=False, strategy_class=None, _local_remote_pairs=None, query_class=None)
    Bases: sqlalchemy.orm.interfaces.StrategizedProperty

    Describes an object property that holds a single item or list of items that correspond to a related database table.

    Public constructor is the orm.relationship() function.

    Of note here is the RelationshipProperty.Comparator class, which implements comparison operations for scalar- and collection-referencing mapped attributes.

    class Comparator(prop, mapper, of_type=None, adapter=None)
        Bases: sqlalchemy.orm.interfaces.PropComparator

        Produce comparison operations for relationship()-based attributes.

        __eq__(other)
            Implement the == operator.
            In a many-to-one context, such as:

            MyClass.some_prop == <some object>

            this will typically produce a clause such as:

            mytable.related_id == <some id>

            Where <some id> is the primary key of the given object.
            The == operator provides partial functionality for non-many-to-one comparisons:
            - Comparisons against collections are not supported. Use contains().
            - Compared to a scalar one-to-many, will produce a clause that compares the target columns in the parent to the given target.
            - Compared to a scalar many-to-many, an alias of the association table will be rendered as well, forming a natural join that is part of the main body of the query. This will not work for queries that go beyond simple AND conjunctions of comparisons, such as those which use OR. Use explicit joins, outerjoins, or has() for more comprehensive non-many-to-one scalar membership tests.
            - Comparisons against None given in a one-to-many or many-to-many context produce a NOT EXISTS clause.

        __init__(prop, mapper, of_type=None, adapter=None)
            Construction of RelationshipProperty.Comparator is internal to the ORM's attribute mechanics.

        __ne__(other)
            Implement the != operator.
            In a many-to-one context, such as:

            MyClass.some_prop != <some object>

            This will typically produce a clause such as:

            mytable.related_id != <some id>

            Where <some id> is the primary key of the given object.
            The != operator provides partial functionality for non-many-to-one comparisons:
            - Comparisons against collections are not supported. Use contains() in conjunction with not_().
            - Compared to a scalar one-to-many, will produce a clause that compares the target columns in the parent to the given target.
            - Compared to a scalar many-to-many, an alias of the association table will be rendered as well, forming a natural join that is part of the main body of the query. This will not work for queries that go beyond simple AND conjunctions of comparisons, such as those which use OR. Use
            explicit joins, outerjoins, or has() in conjunction with not_() for more comprehensive non-many-to-one scalar membership tests.
            - Comparisons against None given in a one-to-many or many-to-many context produce an EXISTS clause.

        adapted(adapter)
            Return a copy of this PropComparator which will use the given adaption function on the local side of generated expressions.

        any(criterion=None, **kwargs)
            Produce an expression that tests a collection against particular criterion, using EXISTS.
            An expression like:

            session.query(MyClass).filter(
                MyClass.somereference.any(SomeRelated.x==2)
            )

            Will produce a query like:

            SELECT * FROM my_table WHERE EXISTS (SELECT 1 FROM related
            WHERE related.my_id=my_table.id AND related.x=2)

            Because any() uses a correlated subquery, its performance is not nearly as good when compared against large target tables as that of using a join.
            any() is particularly useful for testing for empty collections:

            session.query(MyClass).filter(
                ~MyClass.somereference.any()
            )

            will produce:

            SELECT * FROM my_table WHERE NOT EXISTS (SELECT 1 FROM related
            WHERE related.my_id=my_table.id)

            any() is only valid for collections, i.e. a relationship() that has uselist=True. For scalar references, use has().

        contains(other, **kwargs)
            Return a simple expression that tests a collection for containment of a particular item.
            contains() is only valid for a collection, i.e. a relationship() that implements one-to-many or many-to-many with uselist=True.
            When used in a simple one-to-many context, an expression like:

            MyClass.contains(other)

            Produces a clause like:

            mytable.id == <some id>

            Where <some id> is the value of the foreign key attribute on other which refers to the primary key of its parent object. From this it follows that contains() is very useful when used with simple one-to-many operations.
            For many-to-many operations, the behavior of contains() has more caveats. The association table will be rendered in the statement, producing an "implicit" join, that is, includes multiple tables in the FROM clause which are equated in the WHERE clause:

            query(MyClass).filter(MyClass.contains(other))
            Produces a query like:

            SELECT * FROM my_table, my_association_table AS my_association_table_1
            WHERE my_table.id = my_association_table_1.parent_id
            AND my_association_table_1.child_id = <some id>

            Where <some id> would be the primary key of other.
            From the above, it is clear that contains() will not work with many-to-many collections when used in queries that move beyond simple AND conjunctions, such as multiple contains() expressions joined by OR. In such cases subqueries or explicit "outer" joins will need to be used instead. See any() for a less-performant alternative using EXISTS, or refer to Query.outerjoin() as well as Querying with Joins for more details on constructing outer joins.

        has(criterion=None, **kwargs)
            Produce an expression that tests a scalar reference against particular criterion, using EXISTS.
            An expression like:

            session.query(MyClass).filter(
                MyClass.somereference.has(SomeRelated.x==2)
            )

            Will produce a query like:

            SELECT * FROM my_table WHERE EXISTS (SELECT 1 FROM related
            WHERE related.id==my_table.related_id AND related.x=2)

            Because has() uses a correlated subquery, its performance is not nearly as good when compared against large target tables as that of using a join.
            has() is only valid for scalar references, i.e. a relationship() that has uselist=False. For collection references, use any().

        in_(other)
            Produce an IN clause - this is not implemented for relationship()-based attributes at this time.

        of_type(cls)
            Produce a construct that represents a particular "subtype" of attribute for the parent class.
            Currently this is usable in conjunction with Query.join() and Query.outerjoin().

    RelationshipProperty.mapper
        Return the targeted Mapper for this RelationshipProperty.
        This is a lazy-initializing static attribute.

    RelationshipProperty.table
        Return the selectable linked to this RelationshipProperty object's target Mapper.
        Deprecated since version 0.7: Use .target

class sqlalchemy.orm.descriptor_props.SynonymProperty(name, map_column=None, descriptor=None, comparator_factory=None, doc=None)
    Bases: sqlalchemy.orm.descriptor_props.DescriptorProperty

class sqlalchemy.orm.query.QueryContext(query)
    Bases: object
CHAPTER
THREE
SQLALCHEMY CORE
3.1 SQL Expression Language Tutorial
3.1.1 Introduction
The SQLAlchemy Expression Language presents a system of representing relational database structures and expressions using Python constructs. These constructs are modeled to resemble those of the underlying database as closely as possible, while providing a modicum of abstraction of the various implementation differences between database backends. While the constructs attempt to represent equivalent concepts between backends with consistent structures, they do not conceal useful concepts that are unique to particular subsets of backends. The Expression Language therefore presents a method of writing backend-neutral SQL expressions, but does not attempt to enforce that expressions are backend-neutral.

The Expression Language is in contrast to the Object Relational Mapper, which is a distinct API that builds on top of the Expression Language. Whereas the ORM, introduced in Object Relational Tutorial, presents a high level and abstracted pattern of usage, which itself is an example of applied usage of the Expression Language, the Expression Language presents a system of representing the primitive constructs of the relational database directly without opinion.

While there is overlap among the usage patterns of the ORM and the Expression Language, the similarities are more superficial than they may at first appear. One approaches the structure and content of data from the perspective of a user-defined domain model which is transparently persisted and refreshed from its underlying storage model. The other approaches it from the perspective of literal schema and SQL expression representations which are explicitly composed into messages consumed individually by the database.

A successful application may be constructed using the Expression Language exclusively, though the application will need to define its own system of translating application concepts into individual database messages and from individual database result sets. Alternatively, an application constructed with the ORM may, in advanced scenarios, make occasional usage of the Expression Language directly in certain areas where specific database interactions are required.

The following tutorial is in doctest format, meaning each >>> line represents something you can type at a Python command prompt, and the following text represents the expected return value. The tutorial has no prerequisites.
3.1.3 Connecting
For this tutorial we will use an in-memory-only SQLite database. This is an easy way to test things without needing to have an actual database defined anywhere. To connect we use create_engine():

>>> from sqlalchemy import create_engine
>>> engine = create_engine('sqlite:///:memory:', echo=True)

The echo flag is a shortcut to setting up SQLAlchemy logging, which is accomplished via Python's standard logging module. With it enabled, we'll see all the generated SQL produced. If you are working through this tutorial and want less output generated, set it to False. This tutorial will format the SQL behind a popup window so it doesn't get in our way; just click the "SQL" links to see what's being generated.
>>> addresses = Table('addresses', metadata,
...     Column('id', Integer, primary_key=True),
...     Column('user_id', None, ForeignKey('users.id')),
...     Column('email_address', String, nullable=False)
... )

All about how to define Table objects, as well as how to create them from an existing database automatically, is described in Schema Definition Language.

Next, to tell the MetaData we'd actually like to create our selection of tables for real inside the SQLite database, we use create_all(), passing it the engine instance which points to our database. This will check for the presence of each table first before creating, so it's safe to call multiple times:

>>> metadata.create_all(engine)
PRAGMA table_info("users")
()
PRAGMA table_info("addresses")
()
CREATE TABLE users (
    id INTEGER NOT NULL,
    name VARCHAR,
    fullname VARCHAR,
    PRIMARY KEY (id)
)
()
COMMIT
CREATE TABLE addresses (
    id INTEGER NOT NULL,
    user_id INTEGER,
    email_address VARCHAR NOT NULL,
    PRIMARY KEY (id),
    FOREIGN KEY(user_id) REFERENCES users (id)
)
()
COMMIT

Note: Users familiar with the syntax of CREATE TABLE may notice that the VARCHAR columns were generated without a length; on SQLite and Postgresql, this is a valid datatype, but on others, it's not allowed. So if running this tutorial on one of those databases, and you wish to use SQLAlchemy to issue CREATE TABLE, a "length" may be provided to the String type as below:

Column('name', String(50))

The length field on String, as well as similar precision/scale fields available on Integer, Numeric, etc. are not referenced by SQLAlchemy other than when creating tables.

Additionally, Firebird and Oracle require sequences to generate new primary key identifiers, and SQLAlchemy doesn't generate or assume these without being instructed. For that, you use the Sequence construct:

from sqlalchemy import Sequence
Column('id', Integer, Sequence('user_id_seq'), primary_key=True)

A full, foolproof Table is therefore:

users = Table('users', metadata,
    Column('id', Integer, Sequence('user_id_seq'), primary_key=True),
    Column('name', String(50)),
    Column('fullname', String(50)),
    Column('password', String(12))
)

We include this more verbose Table construct separately to highlight the difference between a minimal construct geared primarily towards in-Python usage only, versus one that will be used to emit CREATE TABLE statements on a particular set of backends with more stringent requirements.
Above, while the values method limited the VALUES clause to just two columns, the actual data we placed in values didn't get rendered into the string; instead we got named bind parameters. As it turns out, our data is stored within our Insert construct, but it typically only comes out when the statement is actually executed; since the data consists of literal values, SQLAlchemy automatically generates bind parameters for them. We can peek at this data for now by looking at the compiled form of the statement:

>>> ins.compile().params
{'fullname': 'Jack Jones', 'name': 'jack'}
3.1.6 Executing
The interesting part of an Insert is executing it. In this tutorial, we will generally focus on the most explicit method of executing a SQL construct, and later touch upon some "shortcut" ways to do it. The engine object we created is a repository for database connections capable of issuing SQL to the database. To acquire a connection, we use the connect() method:

>>> conn = engine.connect()
>>> conn
<sqlalchemy.engine.base.Connection object at 0x...>

The Connection object represents an actively checked out DBAPI connection resource. Let's feed it our Insert object and see what happens:

>>> result = conn.execute(ins)
INSERT INTO users (name, fullname) VALUES (?, ?)
('jack', 'Jack Jones')
COMMIT

So the INSERT statement was now issued to the database, although we got positional "qmark" bind parameters instead of "named" bind parameters in the output. How come? Because when executed, the Connection used the SQLite dialect to help generate the statement; when we use the str() function, the statement isn't aware of this dialect, and falls back onto a default which uses named parameters. We can view this manually as follows:

>>> ins.bind = engine
>>> str(ins)
'INSERT INTO users (name, fullname) VALUES (?, ?)'

What about the result variable we got when we called execute()? As the SQLAlchemy Connection object references a DBAPI connection, the result, known as a ResultProxy object, is analogous to the DBAPI cursor object. In the case of an INSERT, we can get important information from it, such as the primary key values which were generated from our statement:

>>> result.inserted_primary_key
[1]

The value of 1 was automatically generated by SQLite, but only because we did not specify the id column in our Insert statement; otherwise, our explicit value would have been used. In either case, SQLAlchemy always knows how to get at a newly generated primary key value, even though the method of generating them is different across different databases; each database's Dialect knows the specific steps needed to determine the correct value (or values; note that inserted_primary_key returns a list so that it supports composite primary keys).
>>> ins = users.insert()
>>> conn.execute(ins, id=2, name='wendy', fullname='Wendy Williams')
INSERT INTO users (id, name, fullname) VALUES (?, ?, ?)
(2, 'wendy', 'Wendy Williams')
COMMIT
<sqlalchemy.engine.base.ResultProxy object at 0x...>

Above, because we specified all three columns in the execute() method, the compiled Insert included all three columns. The Insert statement is compiled at execution time based on the parameters we specified; if we specified fewer parameters, the Insert would have fewer entries in its VALUES clause.

To issue many inserts using DBAPI's executemany() method, we can send in a list of dictionaries each containing a distinct set of parameters to be inserted, as we do here to add some email addresses:

>>> conn.execute(addresses.insert(), [
...    {'user_id': 1, 'email_address' : '[email protected]'},
...    {'user_id': 1, 'email_address' : '[email protected]'},
...    {'user_id': 2, 'email_address' : '[email protected]'},
...    {'user_id': 2, 'email_address' : '[email protected]'},
... ])
INSERT INTO addresses (user_id, email_address) VALUES (?, ?)
((1, '[email protected]'), (1, '[email protected]'), (2, '[email protected]'), (2, '[email protected]'))
COMMIT
<sqlalchemy.engine.base.ResultProxy object at 0x...>

Above, we again relied upon SQLite's automatic generation of primary key identifiers for each addresses row. When executing multiple sets of parameters, each dictionary must have the same set of keys; i.e. you can't have fewer keys in some dictionaries than others. This is because the Insert statement is compiled against the first dictionary in the list, and it's assumed that all subsequent argument dictionaries are compatible with that statement.
Detailed examples of connectionless and implicit execution are available in the Engines chapter: Connectionless Execution, Implicit Execution.
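As a brief sketch of that shortcut style (illustrative only; the 'susan' row is a hypothetical value): a statement executed directly off the Engine checks out a connection, executes, and returns the connection to the pool implicitly:

result = engine.execute(users.insert(), name='susan', fullname='Susan Brown')
result.close()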
3.1.9 Selecting
We began with inserts just so that our test database had some data in it. The more interesting part of the data is selecting it! We'll cover UPDATE and DELETE statements later. The primary construct used to generate SELECT statements is the select() function:

>>> from sqlalchemy.sql import select
>>> s = select([users])
>>> result = conn.execute(s)
SELECT users.id, users.name, users.fullname
FROM users
()

Above, we issued a basic select() call, placing the users table within the COLUMNS clause of the select, and then executing. SQLAlchemy expanded the users table into the set of each of its columns, and also generated a FROM clause for us. The result returned is again a ResultProxy object, which acts much like a DBAPI cursor, including methods such as fetchone() and fetchall(). The easiest way to get rows from it is to just iterate:

>>> for row in result:
...     print row
(1, u'jack', u'Jack Jones')
(2, u'wendy', u'Wendy Williams')
(3, u'fred', u'Fred Flintstone')
(4, u'mary', u'Mary Contrary')
Above, we see that printing each row produces a simple tuple-like result. We have more options at accessing the data in each row. One very common way is through dictionary access, using the string names of columns:

>>> result = conn.execute(s)
SELECT users.id, users.name, users.fullname
FROM users
()
>>> row = result.fetchone()
>>> print "name:", row['name'], "; fullname:", row['fullname']
name: jack ; fullname: Jack Jones

Integer indexes work as well:

>>> row = result.fetchone()
>>> print "name:", row[1], "; fullname:", row[2]
name: wendy ; fullname: Wendy Williams

But another way, whose usefulness will become apparent later on, is to use the Column objects directly as keys:

>>> for row in conn.execute(s):
...     print "name:", row[users.c.name], "; fullname:", row[users.c.fullname]
SELECT users.id, users.name, users.fullname
FROM users
()
name: jack ; fullname: Jack Jones
name: wendy ; fullname: Wendy Williams
name: fred ; fullname: Fred Flintstone
name: mary ; fullname: Mary Contrary
Result sets which have pending rows remaining should be explicitly closed before discarding. While the cursor and connection resources referenced by the ResultProxy will be respectively closed and returned to the connection pool when the object is garbage collected, it's better to make it explicit, as some database APIs are very picky about such things:

>>> result.close()

If we'd like to more carefully control the columns which are placed in the COLUMNS clause of the select, we reference individual Column objects from our Table. These are available as named attributes off the c attribute of the Table object:

>>> s = select([users.c.name, users.c.fullname])
>>> result = conn.execute(s)
SELECT users.name, users.fullname
FROM users
()
>>> for row in result:
...     print row
(u'jack', u'Jack Jones')
(u'wendy', u'Wendy Williams')
(u'fred', u'Fred Flintstone')
(u'mary', u'Mary Contrary')

Let's observe something interesting about the FROM clause. Whereas the generated statement contains two distinct sections, a "SELECT columns" part and a "FROM table" part, our select() construct only has a list containing columns. How does this work? Let's try putting two tables into our select() statement:
>>> for row in conn.execute(select([users, addresses])):
...     print row
SELECT users.id, users.name, users.fullname, addresses.id, addresses.user_id, addresses.email_address
FROM users, addresses
()
(1, u'jack', u'Jack Jones', 1, 1, u'[email protected]')
(1, u'jack', u'Jack Jones', 2, 1, u'[email protected]')
(1, u'jack', u'Jack Jones', 3, 2, u'[email protected]')
(1, u'jack', u'Jack Jones', 4, 2, u'[email protected]')
(2, u'wendy', u'Wendy Williams', 1, 1, u'[email protected]')
(2, u'wendy', u'Wendy Williams', 2, 1, u'[email protected]')
(2, u'wendy', u'Wendy Williams', 3, 2, u'[email protected]')
(2, u'wendy', u'Wendy Williams', 4, 2, u'[email protected]')
(3, u'fred', u'Fred Flintstone', 1, 1, u'[email protected]')
(3, u'fred', u'Fred Flintstone', 2, 1, u'[email protected]')
(3, u'fred', u'Fred Flintstone', 3, 2, u'[email protected]')
(3, u'fred', u'Fred Flintstone', 4, 2, u'[email protected]')
(4, u'mary', u'Mary Contrary', 1, 1, u'[email protected]')
(4, u'mary', u'Mary Contrary', 2, 1, u'[email protected]')
(4, u'mary', u'Mary Contrary', 3, 2, u'[email protected]')
(4, u'mary', u'Mary Contrary', 4, 2, u'[email protected]')

It placed both tables into the FROM clause. But also, it made a real mess. Those who are familiar with SQL joins know that this is a Cartesian product; each row from the users table is produced against each row from the addresses table. So to put some sanity into this statement, we need a WHERE clause. Which brings us to the second argument of select():

>>> s = select([users, addresses], users.c.id==addresses.c.user_id)
>>> for row in conn.execute(s):
...     print row
SELECT users.id, users.name, users.fullname, addresses.id, addresses.user_id, addresses.email_address
FROM users, addresses
WHERE users.id = addresses.user_id
()
(1, u'jack', u'Jack Jones', 1, 1, u'[email protected]')
(1, u'jack', u'Jack Jones', 2, 1, u'[email protected]')
(2, u'wendy', u'Wendy Williams', 3, 2, u'[email protected]')
(2, u'wendy', u'Wendy Williams', 4, 2, u'[email protected]')

So that looks a lot better: we added an expression to our select() which had the effect of adding WHERE users.id = addresses.user_id to our statement, and our results were managed down so that the join of users and addresses rows made sense. But let's look at that expression: it's using just a Python equality operator between two different Column objects. It should be clear that something is up. Saying 1==1 produces True, and 1==2 produces False, not a WHERE clause. So let's see exactly what that expression is doing:

>>> users.c.id==addresses.c.user_id
<sqlalchemy.sql.expression._BinaryExpression object at 0x...>

Wow, surprise! This is neither a True nor a False. Well what is it?

>>> str(users.c.id==addresses.c.user_id)
'users.id = addresses.user_id'

As you can see, the == operator is producing an object that is very much like the Insert and select() objects we've made so far, thanks to Python's __eq__() builtin; you call str() on it and it produces SQL. By now, one can see that everything we are working with is ultimately the same type of object. SQLAlchemy terms the base class of all of these expressions as sqlalchemy.sql.ClauseElement.
3.1.10 Operators
Since we've stumbled upon SQLAlchemy's operator paradigm, let's go through some of its capabilities. We've seen how to equate two columns to each other:

>>> print users.c.id==addresses.c.user_id
users.id = addresses.user_id

If we use a literal value (a literal, meaning, not a SQLAlchemy clause object), we get a bind parameter:

>>> print users.c.id==7
users.id = :id_1

The 7 literal is embedded in the resulting ClauseElement; we can use the same trick we did with the Insert object to see it:

>>> (users.c.id==7).compile().params
{u'id_1': 7}

Most Python operators, as it turns out, produce a SQL expression here, like equals, not equals, etc.:

>>> print users.c.id != 7
users.id != :id_1
>>> # None converts to IS NULL
>>> print users.c.name == None
users.name IS NULL
>>> # reverse works too
>>> print 'fred' > users.c.name
users.name < :name_1

If we add two integer columns together, we get an addition expression:
>>> print users.c.id + addresses.c.id
users.id + addresses.id

Interestingly, the type of the Column is important! If we use + with two string-based columns (recall we put types like Integer and String on our Column objects at the beginning), we get something different:

>>> print users.c.name + users.c.fullname
users.name || users.fullname

Where || is the string concatenation operator used on most databases. But not all of them. MySQL users, fear not:

>>> print (users.c.name + users.c.fullname).compile(bind=create_engine('mysql://'))
concat(users.name, users.fullname)

The above illustrates the SQL that's generated for an Engine that's connected to a MySQL database; the || operator now compiles as MySQL's concat() function.

If you have come across an operator which really isn't available, you can always use the op() method; this generates whatever operator you need:

>>> print users.c.name.op('tiddlywinks')('foo')
users.name tiddlywinks :name_1

This function can also be used to make bitwise operators explicit. For example:

somecolumn.op('&')(0xff)

is a bitwise AND of the value in somecolumn.
3.1.11 Conjunctions
We'd like to show off some of our operators inside of select() constructs. But we need to lump them together a little more, so let's first introduce some conjunctions. Conjunctions are those little words like AND and OR that put things together. We'll also hit upon NOT. AND, OR and NOT can work from the corresponding functions SQLAlchemy provides (notice we also throw in a LIKE):
>>> from sqlalchemy.sql import and_, or_, not_
>>> print and_(users.c.name.like('j%'), users.c.id==addresses.c.user_id,
...     or_(addresses.c.email_address=='[email protected]', addresses.c.email_address=='[email protected]'),
...     not_(users.c.id>5))
users.name LIKE :name_1 AND users.id = addresses.user_id AND
(addresses.email_address = :email_address_1 OR addresses.email_address = :email_address_2)
AND users.id <= :id_1
>>> print users.c.name.like('j%') & (users.c.id==addresses.c.user_id) & \
...     ((addresses.c.email_address=='[email protected]') | (addresses.c.email_address=='[email protected]')) \
...     & ~(users.c.id>5)
users.name LIKE :name_1 AND users.id = addresses.user_id AND
(addresses.email_address = :email_address_1 OR addresses.email_address = :email_address_2)
AND users.id <= :id_1

So with all of this vocabulary, let's select all users who have an email address at AOL or MSN, whose name starts with a letter between "m" and "z", and we'll also generate a column containing their full name combined with their email address. We will add two new constructs to this statement, between() and label(). between() produces a BETWEEN clause, and label() is used in a column expression to produce labels using the AS keyword; it's recommended when selecting from expressions that otherwise would not have a name:
>>> s = select([(users.c.fullname + ", " + addresses.c.email_address).label('title')],
...     and_(
...         users.c.id==addresses.c.user_id,
...         users.c.name.between('m', 'z'),
...         or_(
...             addresses.c.email_address.like('%@aol.com'),
...             addresses.c.email_address.like('%@msn.com')
...         )
...     )
... )
>>> print conn.execute(s).fetchall()
SELECT users.fullname || ? || addresses.email_address AS title
FROM users, addresses
WHERE users.id = addresses.user_id AND users.name BETWEEN ? AND ? AND
(addresses.email_address LIKE ? OR addresses.email_address LIKE ?)
(', ', 'm', 'z', '%@aol.com', '%@msn.com')
[(u'Wendy Williams, [email protected]',)]

Once again, SQLAlchemy figured out the FROM clause for our statement. In fact it will determine the FROM clause based on all of its other bits: the columns clause, the where clause, and also some other elements which we haven't covered yet, which include ORDER BY, GROUP BY, and HAVING.
...     ),
...     from_obj=['users', 'addresses']
... )
>>> print conn.execute(s, x='%@aol.com', y='%@msn.com').fetchall()
SELECT users.fullname || ', ' || addresses.email_address AS title
FROM users, addresses
WHERE users.id = addresses.user_id AND users.name BETWEEN 'm' AND 'z' AND
(addresses.email_address LIKE ? OR addresses.email_address LIKE ?)
('%@aol.com', '%@msn.com')
[(u'Wendy Williams, [email protected]',)]

Going from constructed SQL to text, we lose some capabilities. We lose the capability for SQLAlchemy to compile our expression to a specific target database; above, our expression won't work with MySQL since it has no || construct. It also becomes more tedious for SQLAlchemy to be made aware of the datatypes in use; for example, if our bind parameters required UTF-8 encoding before going in, or conversion from a Python datetime into a string (as is required with SQLite), we would have to add extra information to our text() construct. Similar issues arise on the result set side, where SQLAlchemy also performs type-specific data conversion in some cases; still more information can be added to text() to work around this. But what we really lose from our statement is the ability to manipulate it, transform it, and analyze it. These features are critical when using the ORM, which makes heavy usage of relational transformations. To show off what we mean, we'll first introduce the ALIAS construct and the JOIN construct, just so we have some juicier bits to play with.
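For clarity, the head of the select()-with-text-fragments construct whose tail appears above reads approximately as follows (a hedged reconstruction; the exact fragment strings are assumptions inferred from the rendered SQL):

s = select([text("users.fullname || ', ' || addresses.email_address AS title")],
       and_(
           "users.id = addresses.user_id",
           "users.name BETWEEN 'm' AND 'z'",
           "(addresses.email_address LIKE :x OR addresses.email_address LIKE :y)"
       ),
       from_obj=['users', 'addresses']
   )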
>>> a1 = addresses.alias()
>>> a2 = addresses.alias()
>>> s = select([users], and_(
...     users.c.id==a1.c.user_id,
...     users.c.id==a2.c.user_id,
...     a1.c.email_address=='[email protected]',
...     a2.c.email_address=='[email protected]'
... ))
>>> print conn.execute(s).fetchall()
SELECT users.id, users.name, users.fullname
FROM users, addresses AS addresses_1, addresses AS addresses_2
WHERE users.id = addresses_1.user_id AND users.id = addresses_2.user_id
AND addresses_1.email_address = ? AND addresses_2.email_address = ?
('[email protected]', '[email protected]')
[(1, u'jack', u'Jack Jones')]

Note that the Alias construct generated the names addresses_1 and addresses_2 in the final SQL result. The generation of these names is determined by the position of the construct within the statement. If we created a query using only the second a2 alias, the name would come out as addresses_1. The generation of the names is also deterministic, meaning the same SQLAlchemy statement construct will produce the identical SQL string each time it is rendered for a particular dialect.
Since on the outside, we refer to the alias using the Alias construct itself, we don't need to be concerned about the generated name. However, for the purposes of debugging, it can be specified by passing a string name to the FromClause.alias() method:

>>> a1 = addresses.alias('a1')

Aliases can of course be used for anything which you can SELECT from, including SELECT statements themselves. We can self-join the users table back to the select() we've created by making an alias of the entire statement. The correlate(None) directive is to avoid SQLAlchemy's attempt to "correlate" the inner users table with the outer one:
>>> a1 = s.correlate(None).alias()
>>> s = select([users.c.name], users.c.id==a1.c.id)
>>> print conn.execute(s).fetchall()
SELECT users.name
FROM users, (SELECT users.id AS id, users.name AS name, users.fullname AS fullname
FROM users, addresses AS addresses_1, addresses AS addresses_2
WHERE users.id = addresses_1.user_id AND users.id = addresses_2.user_id
AND addresses_1.email_address = ? AND addresses_2.email_address = ?) AS anon_1
WHERE users.id = anon_1.id
('[email protected]', '[email protected]')
[(u'jack',)]
>>> s = select([users.c.fullname], from_obj=[users.outerjoin(addresses)])
>>> print s
SELECT users.fullname
FROM users LEFT OUTER JOIN addresses ON users.id = addresses.user_id

That's the output outerjoin() produces, unless, of course, you're stuck in a gig using Oracle prior to version 9, and you've set up your engine (which would be using OracleDialect) to use Oracle-specific SQL:

>>> from sqlalchemy.dialects.oracle import dialect as OracleDialect
>>> print s.compile(dialect=OracleDialect(use_ansi=False))
SELECT users.fullname
FROM users, addresses
WHERE users.id = addresses.user_id(+)

If you don't know what that SQL means, don't worry! The secret tribe of Oracle DBAs don't want their black magic being found out ;).
>>> from sqlalchemy.sql import exists
>>> query = query.where(
...     exists([addresses.c.id],
...         and_(addresses.c.user_id==users.c.id, addresses.c.email_address.like('%@msn.com'))
...     ).correlate(users))

And finally, the application also wants to see the listing of email addresses at once; so to save queries, we outerjoin the addresses table (using an outer join so that users with no addresses come back as well; since we're programmatic, we might not have kept track that we used an EXISTS clause against the addresses table too...). Additionally, since the users and addresses table both have a column named id, let's isolate their names from each other in the COLUMNS clause by using labels:

>>> query = query.column(addresses).select_from(users.outerjoin(addresses)).apply_labels()
>>> conn.execute(query).fetchall()
SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, addresses.id AS addresses_id, addresses.user_id AS addresses_user_id, addresses.email_address AS addresses_email_address
FROM users LEFT OUTER JOIN addresses ON users.id = addresses.user_id
WHERE users.name = ? AND (EXISTS (SELECT addresses.id
FROM addresses
WHERE addresses.user_id = users.id AND addresses.email_address LIKE ?)) ORDER BY users.fullname
('jack', '%@msn.com')
[(1, u'jack', u'Jack Jones', 1, 1, u'[email protected]'), (1, u'jack', u'Jack Jones', 2, 1, u'[email protected]')]

The generative approach is about starting small, adding one thing at a time, to arrive with a full statement.

Transforming a Statement

We've seen how methods like Select.where() and _SelectBase.order_by() are part of the so-called Generative family of methods on the select() construct, where one select() copies itself to return a new one with modifications. SQL constructs also support another form of generative behavior which is the transformation. This is an advanced technique that most core applications won't use directly; however, it is a system which the ORM relies on heavily, and can be useful for any system that deals with generalized behavior of Core SQL constructs. Using a transformation we can take our users/addresses query and replace all occurrences of addresses with an alias of itself. That is, anywhere that addresses is referred to in the original query, the new query will refer to addresses_1, which is selected as addresses AS addresses_1. The FromClause.replace_selectable() method can achieve this:
>>> a1 = addresses.alias()
>>> query = query.replace_selectable(addresses, a1)
>>> print query
SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, addresses_1.id AS addresses_1_id, addresses_1.user_id AS addresses_1_user_id, addresses_1.email_address AS addresses_1_email_address
FROM users LEFT OUTER JOIN addresses AS addresses_1 ON users.id = addresses_1.user_id
WHERE users.name = :name_1 AND (EXISTS (SELECT addresses_1.id
FROM addresses AS addresses_1
WHERE addresses_1.user_id = users.id AND addresses_1.email_address LIKE :email_address_1)) ORDER BY users.fullname

For a query such as the above, we can access the columns referred to by the a1 alias in a result set using the Column objects present directly on a1:
>>> for row in conn.execute(query):
...     print "Name:", row[users.c.name], "; Email Address", row[a1.c.email_address]
SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, addresses_1.id AS addresses_1_id, addresses_1.user_id AS addresses_1_user_id, addresses_1.email_address AS addresses_1_email_address
FROM users LEFT OUTER JOIN addresses AS addresses_1 ON users.id = addresses_1.user_id
WHERE users.name = ? AND (EXISTS (SELECT addresses_1.id
FROM addresses AS addresses_1
WHERE addresses_1.user_id = users.id AND addresses_1.email_address LIKE ?)) ORDER BY users.fullname
('jack', '%@msn.com')
Name: jack ; Email Address [email protected]
Name: jack ; Email Address [email protected]
Bind Parameter Objects

Throughout all these examples, SQLAlchemy is busy creating bind parameters wherever literal expressions occur. You can also specify your own bind parameters with your own names, and use the same statement repeatedly. The database dialect converts to the appropriate named or positional style, as here where it converts to positional for SQLite:

>>> from sqlalchemy.sql import bindparam
>>> s = users.select(users.c.name==bindparam('username'))
>>> conn.execute(s, username='wendy').fetchall()
SELECT users.id, users.name, users.fullname
FROM users
WHERE users.name = ?
('wendy',)
[(2, u'wendy', u'Wendy Williams')]

Another important aspect of bind parameters is that they may be assigned a type. The type of the bind parameter will determine its behavior within expressions and also how the data bound to it is processed before being sent off to the database:

>>> s = users.select(users.c.name.like(bindparam('username', type_=String) + text("'%'")))
>>> conn.execute(s, username='wendy').fetchall()
SELECT users.id, users.name, users.fullname
FROM users
WHERE users.name LIKE ? || '%'
('wendy',)
[(2, u'wendy', u'Wendy Williams')]

Bind parameters of the same name can also be used multiple times, where only a single named value is needed in the execute parameters:
>>> s = select([users, addresses],
...     users.c.name.like(bindparam('name', type_=String) + text("'%'")) |
...     addresses.c.email_address.like(bindparam('name', type_=String) + text("'@%'")),
...     from_obj=[users.outerjoin(addresses)])
>>> conn.execute(s, name='jack').fetchall()
SELECT users.id, users.name, users.fullname, addresses.id, addresses.user_id, addresses.email_address
FROM users LEFT OUTER JOIN addresses ON users.id = addresses.user_id
WHERE users.name LIKE ? || '%' OR addresses.email_address LIKE ? || '@%'
('jack', 'jack')
[(1, u'jack', u'Jack Jones', 1, 1, u'[email protected]'), (1, u'jack', u'Jack Jones', 2, 1, u'[email protected]')]

Functions

SQL functions are created using the func keyword, which generates functions using attribute access:

>>> from sqlalchemy.sql import func
>>> print func.now()
now()
>>> print func.concat('x', 'y')
concat(:param_1, :param_2)

By "generates", we mean that any SQL function is created based on the word you choose:

>>> print func.xyz_my_goofy_function()
xyz_my_goofy_function()
Certain function names are known by SQLAlchemy, allowing special behavioral rules to be applied. Some for example are "ANSI" functions, which means they don't get the parentheses added after them, such as CURRENT_TIMESTAMP:

>>> print func.current_timestamp()
CURRENT_TIMESTAMP

Functions are most typically used in the columns clause of a select statement, and can also be labeled as well as given a type. Labeling a function is recommended so that the result can be targeted in a result row based on a string name, and assigning it a type is required when you need result-set processing to occur, such as for Unicode conversion and date conversions. Below, we use the result function scalar() to just read the first column of the first row and then close the result; the label, even though present, is not important in this case:

>>> print conn.execute(
...     select([func.max(addresses.c.email_address, type_=String).label('maxemail')])
... ).scalar()
SELECT max(addresses.email_address) AS maxemail
FROM addresses
()
[email protected]

Databases such as PostgreSQL and Oracle which support functions that return whole result sets can be assembled into selectable units, which can be used in statements. Such as, a database function calculate() which takes the parameters x and y, and returns three columns which we'd like to name q, z and r, we can construct using "lexical" column objects as well as bind parameters:

>>> from sqlalchemy.sql import column
>>> calculate = select([column('q'), column('z'), column('r')],
...     from_obj=[func.calculate(bindparam('x'), bindparam('y'))])
>>> print select([users], users.c.id > calculate.c.z)
SELECT users.id, users.name, users.fullname
FROM users, (SELECT q, z, r
FROM calculate(:x, :y))
WHERE users.id > z

If we wanted to use our calculate statement twice with different bind parameters, the unique_params() function will create copies for us, and mark the bind parameters as "unique" so that conflicting names are isolated. Note we also make two separate aliases of our selectable:

>>> s = select([users], users.c.id.between(
...     calculate.alias('c1').unique_params(x=17, y=45).c.z,
...     calculate.alias('c2').unique_params(x=5, y=12).c.z))
>>> print s
SELECT users.id, users.name, users.fullname
FROM users, (SELECT q, z, r
FROM calculate(:x_1, :y_1)) AS c1, (SELECT q, z, r
FROM calculate(:x_2, :y_2)) AS c2
WHERE users.id BETWEEN c1.z AND c2.z
>>> s.compile().params
{u'x_2': 5, u'y_2': 12, u'y_1': 45, u'x_1': 17}

See also sqlalchemy.sql.expression.func.
Window Functions

Any FunctionElement, including functions generated by func, can be turned into a "window function", that is, an OVER clause, using the over() method:

>>> s = select([users.c.id, func.row_number().over(order_by=users.c.name)])
>>> print s
SELECT users.id, row_number() OVER (ORDER BY users.name) AS anon_1
FROM users

Unions and Other Set Operations

Unions come in two flavors, UNION and UNION ALL, which are available via module level functions:

>>> from sqlalchemy.sql import union
>>> u = union(
...     addresses.select(addresses.c.email_address=='[email protected]'),
...     addresses.select(addresses.c.email_address.like('%@yahoo.com')),
... ).order_by(addresses.c.email_address)
>>> print conn.execute(u).fetchall()
SELECT addresses.id, addresses.user_id, addresses.email_address
FROM addresses
WHERE addresses.email_address = ? UNION SELECT addresses.id, addresses.user_id, addresses.email_address
FROM addresses
WHERE addresses.email_address LIKE ? ORDER BY addresses.email_address
('[email protected]', '%@yahoo.com')
[(1, 1, u'[email protected]')]

Also available, though not supported on all databases, are intersect(), intersect_all(), except_(), and except_all():

>>> from sqlalchemy.sql import except_
>>> u = except_(
...     addresses.select(addresses.c.email_address.like('%@%.com')),
...     addresses.select(addresses.c.email_address.like('%@msn.com'))
... )
>>> print conn.execute(u).fetchall()
SELECT addresses.id, addresses.user_id, addresses.email_address
FROM addresses
WHERE addresses.email_address LIKE ? EXCEPT SELECT addresses.id, addresses.user_id, addresses.email_address
FROM addresses
WHERE addresses.email_address LIKE ?
('%@%.com', '%@msn.com')
[(1, 1, u'[email protected]'), (4, 2, u'[email protected]')]

A common issue with so-called "compound" selectables arises due to the fact that they nest with parenthesis. SQLite in particular doesn't like a statement that starts with parenthesis. So when nesting a "compound" inside a "compound", it's often necessary to apply .alias().select() to the first element of the outermost compound, if that element is also a compound. For example, to nest a "union" and a "select" inside of except_, SQLite will want the "union" to be stated as a subquery:

>>> u = except_(
...     union(
...         addresses.select(addresses.c.email_address.like('%@yahoo.com')),
...         addresses.select(addresses.c.email_address.like('%@msn.com'))
...     ).alias().select(),   # apply subquery here
...     addresses.select(addresses.c.email_address.like('%@msn.com'))
... )
>>> print conn.execute(u).fetchall()
SELECT anon_1.id, anon_1.user_id, anon_1.email_address
FROM (SELECT addresses.id AS id, addresses.user_id AS user_id,
addresses.email_address AS email_address
FROM addresses
WHERE addresses.email_address LIKE ? UNION SELECT addresses.id AS id,
addresses.user_id AS user_id, addresses.email_address AS email_address
FROM addresses
WHERE addresses.email_address LIKE ?) AS anon_1 EXCEPT SELECT addresses.id, addresses.user_id, addresses.email_address
FROM addresses
WHERE addresses.email_address LIKE ?
('%@yahoo.com', '%@msn.com', '%@msn.com')
[(1, 1, u'[email protected]')]

Scalar Selects

To embed a SELECT in a column expression, use as_scalar():
>>> print conn.execute(select([
...     users.c.name,
...     select([func.count(addresses.c.id)], users.c.id==addresses.c.user_id).as_scalar()
... ])).fetchall()
SELECT users.name, (SELECT count(addresses.id) AS count_1
FROM addresses
WHERE users.id = addresses.user_id) AS anon_1
FROM users
()
[(u'jack', 2), (u'wendy', 2), (u'fred', 0), (u'mary', 0)]
>>> print conn.execute(select([
...     users.c.name,
...     select([func.count(addresses.c.id)], users.c.id==addresses.c.user_id).label('address_count')
... ])).fetchall()
SELECT users.name, (SELECT count(addresses.id) AS count_1
FROM addresses
WHERE users.id = addresses.user_id) AS address_count
FROM users
()
[(u'jack', 2), (u'wendy', 2), (u'fred', 0), (u'mary', 0)]

Correlated Subqueries

Notice in the examples on scalar selects, the FROM clause of each embedded select did not contain the users table in its FROM clause. This is because SQLAlchemy automatically attempts to correlate embedded FROM objects to that of an enclosing query. To disable this, or to specify explicit FROM clauses to be correlated, use correlate():

>>> s = select([users.c.name], users.c.id==select([users.c.id]).correlate(None))
>>> print s
SELECT users.name
FROM users
WHERE users.id = (SELECT users.id
FROM users)

>>> s = select([users.c.name, addresses.c.email_address], users.c.id==
...     select([users.c.id], users.c.id==addresses.c.user_id).correlate(addresses)
... )
>>> print s
SELECT users.name, addresses.email_address
FROM users, addresses
WHERE users.id = (SELECT users.id
FROM users
WHERE users.id = addresses.user_id)

Ordering, Grouping, Limiting, Offset...ing...

The select() function can take keyword arguments order_by, group_by (as well as having), limit, and offset. There's also distinct=True. These are all also available as generative functions. order_by() expressions can use the modifiers asc() or desc() to indicate ascending or descending.

>>> s = select([addresses.c.user_id, func.count(addresses.c.id)]).\
...     group_by(addresses.c.user_id).having(func.count(addresses.c.id)>1)
>>> print conn.execute(s).fetchall()
SELECT addresses.user_id, count(addresses.id) AS count_1
FROM addresses GROUP BY addresses.user_id
HAVING count(addresses.id) > ?
(1,)
[(1, 2), (2, 2)]

>>> s = select([addresses.c.email_address, addresses.c.id]).distinct().\
...     order_by(addresses.c.email_address.desc(), addresses.c.id)
>>> conn.execute(s).fetchall()
SELECT DISTINCT addresses.email_address, addresses.id
FROM addresses ORDER BY addresses.email_address DESC, addresses.id
()
[(u'[email protected]', 3), (u'[email protected]', 4), (u'[email protected]', 1), (u'[email protected]', 2)]

>>> s = select([addresses]).offset(1).limit(1)
>>> print conn.execute(s).fetchall()
SELECT addresses.id, addresses.user_id, addresses.email_address
FROM addresses
LIMIT 1 OFFSET 1
()
[(2, 1, u'[email protected]')]
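For completeness, the asc() modifier mentioned above works the same way as desc(); a tiny illustrative sketch:

# Ascending ordering via the asc() modifier; desc() is its mirror.
s = select([users]).order_by(users.c.name.asc())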
# insert from a concatenation expression
addresses.insert().values(email_address = name + '@' + host)

values() can be mixed with per-execution values:

conn.execute(
    users.insert().values(name=func.upper('jack')),
    fullname='Jack Jones'
)

bindparam() constructs can be passed, however the names of the table's columns are reserved for the automatic generation of bind names:

users.insert().values(id=bindparam('_id'), name=bindparam('_name'))

# insert many rows at once:
conn.execute(
    users.insert().values(id=bindparam('_id'), name=bindparam('_name')),
    [
        {'_id':1, '_name':'name1'},
        {'_id':2, '_name':'name2'},
        {'_id':3, '_name':'name3'},
    ]
)

Updates work a lot like INSERTs, except there is an additional WHERE clause that can be specified:

>>> # change 'jack' to 'ed'
>>> conn.execute(users.update().
...     where(users.c.name=='jack').
...     values(name='ed')
... )
UPDATE users SET name=? WHERE users.name = ?
('ed', 'jack')
COMMIT
<sqlalchemy.engine.base.ResultProxy object at 0x...>

>>> # use bind parameters
>>> u = users.update().\
...     where(users.c.name==bindparam('oldname')).\
...     values(name=bindparam('newname'))
>>> conn.execute(u, oldname='jack', newname='ed')
UPDATE users SET name=? WHERE users.name = ?
('ed', 'jack')
COMMIT
<sqlalchemy.engine.base.ResultProxy object at 0x...>

>>> # with binds, you can also update many rows at once
>>> conn.execute(u,
...     {'oldname':'jack', 'newname':'ed'},
...     {'oldname':'wendy', 'newname':'mary'},
...     {'oldname':'jim', 'newname':'jake'},
... )
UPDATE users SET name=? WHERE users.name = ?
[('ed', 'jack'), ('mary', 'wendy'), ('jake', 'jim')]
COMMIT
<sqlalchemy.engine.base.ResultProxy object at 0x...>
>>> # update a column to an expression
>>> conn.execute(users.update().
...     values(fullname="Fullname: " + users.c.name)
... )
UPDATE users SET fullname=(? || users.name)
('Fullname: ',)
COMMIT
<sqlalchemy.engine.base.ResultProxy object at 0x...>

Correlated Updates

A correlated update lets you update a table using selection from another table, or the same table:

>>> s = select([addresses.c.email_address], addresses.c.user_id==users.c.id).limit(1)
>>> conn.execute(users.update().values(fullname=s))
UPDATE users SET fullname=(SELECT addresses.email_address
FROM addresses
WHERE addresses.user_id = users.id
LIMIT 1 OFFSET 0)
()
COMMIT
<sqlalchemy.engine.base.ResultProxy object at 0x...>
3.1.18 Deletes
Finally, a delete. Easy enough:

>>> conn.execute(addresses.delete())
DELETE FROM addresses
()
COMMIT
<sqlalchemy.engine.base.ResultProxy object at 0x...>

>>> conn.execute(users.delete().where(users.c.name > 'm'))
DELETE FROM users WHERE users.name > ?
('m',)
COMMIT
<sqlalchemy.engine.base.ResultProxy object at 0x...>
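As a side note (a sketch, not part of the tutorial's doctest flow): the ResultProxy returned by UPDATE and DELETE statements reports how many rows were matched by the statement's criteria via its rowcount attribute:

result = conn.execute(users.delete().where(users.c.name > 'm'))
# rowcount: rows matched by the WHERE clause of an UPDATE or DELETE
# (matched, which on some backends may differ from actually modified).
print result.rowcount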
3.2.1 Functions
The expression package uses functions to construct SQL expressions. The return value of each function is an object instance which is a subclass of ClauseElement.

sqlalchemy.sql.expression.alias(selectable, name=None)
Return an Alias object.

An Alias represents any FromClause with an alternate name assigned within SQL, typically using the AS clause when generated, e.g. SELECT * FROM table AS aliasname.

Similar functionality is available via the alias() method available on all FromClause subclasses.

When an Alias is created from a Table object, this has the effect of the table being rendered as tablename AS aliasname in a SELECT statement.

For select() objects, the effect is that of creating a named subquery, i.e. (select ...) AS aliasname.
The name parameter is optional, and provides the name to use in the rendered SQL. If blank, an "anonymous" name will be deterministically generated at compile time. Deterministic means the name is guaranteed to be unique against other constructs used in the same statement, and will also be the same name for each successive compilation of the same statement object.

Parameters
- selectable: any FromClause subclass, such as a table, select statement, etc.
- name: string name to be assigned as the alias. If None, a name will be deterministically generated at compile time.

sqlalchemy.sql.expression.and_(*clauses)
Join a list of clauses together using the AND operator. The & operator is also overloaded on all _CompareMixin subclasses to produce the same result.

sqlalchemy.sql.expression.asc(column)
Return an ascending ORDER BY clause element. e.g.:

someselect.order_by(asc(table1.mycol))

produces:

ORDER BY mycol ASC

sqlalchemy.sql.expression.between(ctest, cleft, cright)
Return a BETWEEN predicate clause. Equivalent of SQL clausetest BETWEEN clauseleft AND clauseright. The between() method on all _CompareMixin subclasses provides similar functionality.
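A small illustrative sketch of between(), reusing the tutorial's users table (not an example from the reference itself):

from sqlalchemy.sql import between, select

# Renders: users.id BETWEEN :id_1 AND :id_2
s = select([users]).where(between(users.c.id, 1, 3))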
sqlalchemy.sql.expression.bindparam(key, value=None, type_=None, unique=False, required=False, callable_=None)
Create a bind parameter clause with the given key.

Parameters
- key: the key for this bind param. Will be used in the generated SQL statement for dialects that use named parameters. This value may be modified when part of a compilation operation, if other _BindParamClause objects exist with the same key, or if its length is too long and truncation is required.
- value: Initial value for this bind param. This value may be overridden by the dictionary of parameters sent to statement compilation/execution.
- callable_: A callable function that takes the place of value. The function will be called at statement execution time to determine the ultimate value. Used for scenarios where the actual bind value cannot be determined at the point at which the clause construct is created, but embedded bind values are still desirable.
- type_: A TypeEngine object that will be used to pre-process the value corresponding to this _BindParamClause at execution time.
- unique: if True, the key name of this _BindParamClause will be modified if another _BindParamClause of the same name already has been located within the containing ClauseElement.
- required: a value is required at execution time.

sqlalchemy.sql.expression.case(whens, value=None, else_=None)
Produce a CASE statement.

- whens: A sequence of pairs, or alternatively a dict, to be translated into "WHEN / THEN" clauses.
- value: Optional for simple case statements, produces a column expression as in CASE <expr> WHEN ...
- else_: Optional as well, for case defaults produces the ELSE portion of the CASE statement.

The expressions used for THEN and ELSE, when specified as strings, will be interpreted as bound values. To specify textual SQL expressions for these, use the literal_column() construct. The expressions used for the WHEN criterion may only be literal strings when value is present, i.e. CASE table.somecol WHEN 'x' THEN 'y'. Otherwise, literal strings are not accepted in this position, and either the text(<string>) or literal(<string>) constructs must be used to interpret raw string values.

Usage examples:

case([(orderline.c.qty > 100, item.c.specialprice),
      (orderline.c.qty > 10, item.c.bulkprice)
    ], else_=item.c.regularprice)

case(value=emp.c.type, whens={
        'engineer': emp.c.salary * 1.1,
        'manager':  emp.c.salary * 3,
    })

Using literal_column(), to allow for databases that do not support bind parameters in the then clause. The type can be specified which determines the type of the case() construct overall:

case([(orderline.c.qty > 100, literal_column("'greaterthan100'", String)),
      (orderline.c.qty > 10, literal_column("'greaterthan10'", String))
    ], else_=literal_column("'lessthan10'", String))
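A quick sketch of the unique flag described above (an illustrative fragment; the column names reuse the tutorial's users table):

from sqlalchemy.sql import and_, bindparam

# Both params use the key 'name'; unique=True renames them (e.g. to
# :name_1 and :name_2) rather than letting them collide.
criterion = and_(
    users.c.name == bindparam('name', unique=True),
    users.c.fullname == bindparam('name', unique=True),
)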
sqlalchemy.sql.expression.cast(clause, totype, **kwargs)
Return a CAST function.

Equivalent of SQL CAST(clause AS totype). Use with a TypeEngine subclass, i.e.:

cast(table.c.unit_price * table.c.qty, Numeric(10,4))

or:

cast(table.c.timestamp, DATE)

sqlalchemy.sql.expression.column(text, type_=None)
Return a textual column clause, as would be in the columns clause of a SELECT statement.

The object returned is an instance of ColumnClause, which represents the "syntactical" portion of the schema-level Column object. It is often used directly within select() constructs or with lightweight table() constructs.

Note that the column() function is not part of the sqlalchemy namespace. It must be imported from the sql package:

from sqlalchemy.sql import table, column

Parameters
- text: the name of the column. Quoting rules will be applied to the clause like any other column name. For textual column constructs that are not to be quoted, use the literal_column() function.
- type_: an optional TypeEngine object which will provide result-set translation for this column.

See ColumnClause for further examples.

sqlalchemy.sql.expression.collate(expression, collation)
Return the clause expression COLLATE collation. e.g.:

collate(mycolumn, 'utf8_bin')

produces:

mycolumn COLLATE utf8_bin

sqlalchemy.sql.expression.delete(table, whereclause=None, **kwargs)
Return a Delete clause element.

Similar functionality is available via the delete() method on Table.

Parameters
- table: The table to be deleted from.
- whereclause: A ClauseElement describing the WHERE condition of the DELETE statement. Note that the where() generative method may be used instead.

sqlalchemy.sql.expression.desc(column)
Return a descending ORDER BY clause element. e.g.:

someselect.order_by(desc(table1.mycol))

produces:

ORDER BY mycol DESC

sqlalchemy.sql.expression.distinct(expr)
Return a DISTINCT clause. e.g.:

distinct(a)

renders:

DISTINCT a

sqlalchemy.sql.expression.except_(*selects, **kwargs)
Return an EXCEPT of multiple selectables. The returned object is an instance of CompoundSelect.

- *selects: a list of Select instances.
- **kwargs: available keyword arguments are the same as those of select().

sqlalchemy.sql.expression.except_all(*selects, **kwargs)
Return an EXCEPT ALL of multiple selectables. The returned object is an instance of CompoundSelect.

- *selects: a list of Select instances.
- **kwargs: available keyword arguments are the same as those of select().

sqlalchemy.sql.expression.exists(*args, **kwargs)
Return an EXISTS clause as applied to a Select object.

Calling styles are of the following forms:

# use on an existing select()
s = select([table.c.col1]).where(table.c.col2==5)
s = exists(s)

# construct a select() at once
exists(['*'], **select_arguments).where(criterion)

# columns argument is optional, generates "EXISTS (SELECT *)"
# by default.
exists().where(table.c.col2==5)
sqlalchemy.sql.expression.extract(field, expr)
Return the clause extract(field FROM expr).

sqlalchemy.sql.expression.false()
Return a _False object, which compiles to false, or the boolean equivalent for the target dialect.

sqlalchemy.sql.expression.func
Generate SQL function expressions.

func is a special object instance which generates SQL functions based on name-based attributes, e.g.:

>>> print func.count(1)
count(:param_1)

The element is a column-oriented SQL element like any other, and is used in that way:

>>> print select([func.count(table.c.id)])
SELECT count(sometable.id) FROM sometable

Any name can be given to func. If the function name is unknown to SQLAlchemy, it will be rendered exactly as is. For common SQL functions which SQLAlchemy is aware of, the name may be interpreted as a generic function which will be compiled appropriately to the target database:

>>> print func.current_timestamp()
CURRENT_TIMESTAMP

To call functions which are present in dot-separated packages, specify them in the same manner:

>>> print func.stats.yield_curve(5, 10)
stats.yield_curve(:yield_curve_1, :yield_curve_2)

SQLAlchemy can be made aware of the return type of functions to enable type-specific lexical and result-based behavior. For example, to ensure that a string-based function returns a Unicode value and is similarly treated as a string in expressions, specify Unicode as the type:

>>> print func.my_string(u'hi', type_=Unicode) + ' ' + \
...     func.my_string(u'there', type_=Unicode)
my_string(:my_string_1) || :my_string_2 || my_string(:my_string_3)

The object returned by a func call is an instance of Function. This object meets the "column" interface, including comparison and labeling functions. The object can also be passed to the execute() method of a Connection or Engine, where it will be wrapped inside of a SELECT statement first:

print connection.execute(func.current_timestamp()).scalar()

A function can also be "bound" to a Engine or Connection using the bind keyword argument, providing an execute() as well as a scalar() method:

myfunc = func.current_timestamp(bind=some_engine)
print myfunc.scalar()

Functions which are interpreted as "generic" functions know how to calculate their return type automatically. For a listing of known generic functions, see Generic Functions.
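Returning to extract() above, a brief illustrative sketch (the orders table and its order_date column are hypothetical stand-ins):

from sqlalchemy import extract

# Renders: EXTRACT(year FROM orders.order_date)
expr = extract('year', orders.c.order_date)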
sqlalchemy.sql.expression.insert(table, values=None, inline=False, **kwargs)
Return an Insert clause element.

Similar functionality is available via the insert() method on Table.

See also: Insert Expressions - Core Tutorial description of the insert() construct.

Parameters
- table: The table to be inserted into.
- values: A dictionary which specifies the column specifications of the INSERT, and is optional. If left as None, the column specifications are determined from the bind parameters used during the compile phase of the INSERT statement. If the bind parameters also are None during the compile phase, then the column specifications will be generated from the full list of table columns. Note that the values() generative method may also be used for this.
- prefixes: A list of modifier keywords to be inserted between INSERT and INTO. Alternatively, the prefix_with() generative method may be used.
- inline: if True, SQL defaults will be compiled "inline" into the statement and not pre-executed.

If both values and compile-time bind parameters are present, the compile-time bind parameters override the information specified within values on a per-key basis.

The keys within values can be either Column objects or their string identifiers. Each key may reference one of:

- a literal data value (i.e. string, number, etc.);
- a Column object;
- a SELECT statement.

If a SELECT statement is specified which references this INSERT statement's table, the statement will be correlated against the INSERT statement.

sqlalchemy.sql.expression.intersect(*selects, **kwargs)
Return an INTERSECT of multiple selectables. The returned object is an instance of CompoundSelect.

- *selects: a list of Select instances.
- **kwargs: available keyword arguments are the same as those of select().

sqlalchemy.sql.expression.intersect_all(*selects, **kwargs)
Return an INTERSECT ALL of multiple selectables. The returned object is an instance of CompoundSelect.

- *selects: a list of Select instances.
- **kwargs: available keyword arguments are the same as those of select().

sqlalchemy.sql.expression.join(left, right, onclause=None, isouter=False)
Return a JOIN clause element (regular inner join).

The returned object is an instance of Join. Similar functionality is also available via the join() method on any FromClause.

Parameters
- left: The left side of the join.
- right: The right side of the join.
- onclause: Optional criterion for the ON clause; is derived from foreign key relationships established between left and right otherwise.

To chain joins together, use the FromClause.join() or FromClause.outerjoin() methods on the resulting Join object.

sqlalchemy.sql.expression.label(name, obj)
Return a _Label object for the given ColumnElement.

A label changes the name of an element in the columns clause of a SELECT statement, typically via the AS SQL keyword.

This functionality is more conveniently available via the label() method on ColumnElement.

- name: label name
- obj: a ColumnElement.

sqlalchemy.sql.expression.literal(value, type_=None)
Return a literal clause, bound to a bind parameter.

Literal clauses are created automatically when non-ClauseElement objects (such as strings, ints, dates, etc.) are used in a comparison operation with a _CompareMixin subclass, such as a Column object. Use this function to force the generation of a literal clause, which will be created as a _BindParamClause with a bound value.

Parameters
- value: the value to be bound. Can be any Python object supported by the underlying DB-API, or is translatable via the given type argument.
- type_: an optional TypeEngine which will provide bind-parameter translation for this literal.

sqlalchemy.sql.expression.literal_column(text, type_=None)
Return a textual column expression, as would be in the columns clause of a SELECT statement.

The object returned supports further expressions in the same way as any other column object, including comparison, math and string operations. The type_ parameter is important to determine proper expression behavior (such as, '+' means string concatenation or numerical addition based on the type).

Parameters
- text: the text of the expression; can be any SQL expression. Quoting rules will not be applied. To specify a column-name expression which should be subject to quoting rules, use the column() function.
- type_: an optional TypeEngine object which will provide result-set translation and additional expression semantics for this column. If left as None the type will be NullType.

sqlalchemy.sql.expression.not_(clause)
Return a negation of the given clause, i.e. NOT(clause). The ~ operator is also overloaded on all _CompareMixin subclasses to produce the same result.

sqlalchemy.sql.expression.null()
Return a _Null object, which compiles to NULL.

sqlalchemy.sql.expression.nullsfirst(column)
Return a NULLS FIRST ORDER BY clause element.
e.g.:

someselect.order_by(desc(table1.mycol).nullsfirst())

produces:

ORDER BY mycol DESC NULLS FIRST

sqlalchemy.sql.expression.nullslast(column)
Return a NULLS LAST ORDER BY clause element. e.g.:

someselect.order_by(desc(table1.mycol).nullslast())

produces:

ORDER BY mycol DESC NULLS LAST

sqlalchemy.sql.expression.or_(*clauses)
Join a list of clauses together using the OR operator. The | operator is also overloaded on all _CompareMixin subclasses to produce the same result.

sqlalchemy.sql.expression.outparam(key, type_=None)
Create an "OUT" parameter for usage in functions (stored procedures), for databases which support them.

The outparam can be used like a regular function parameter. The "output" value will be available from the ResultProxy object via its out_parameters attribute, which returns a dictionary containing the values.

sqlalchemy.sql.expression.outerjoin(left, right, onclause=None)
Return an OUTER JOIN clause element.

The returned object is an instance of Join. Similar functionality is also available via the outerjoin() method on any FromClause.

Parameters
- left: The left side of the join.
- right: The right side of the join.
- onclause: Optional criterion for the ON clause; is derived from foreign key relationships established between left and right otherwise.

To chain joins together, use the FromClause.join() or FromClause.outerjoin() methods on the resulting Join object.

sqlalchemy.sql.expression.over(func, partition_by=None, order_by=None)
Produce an OVER clause against a function.

Used against aggregate or so-called "window" functions, for database backends that support window functions.

E.g.:

from sqlalchemy import over
over(func.row_number(), order_by='x')

Would produce ROW_NUMBER() OVER(ORDER BY x).

Parameters
- func: a FunctionElement construct, typically generated by func.
- partition_by: a column element or string, or a list of such, that will be used as the PARTITION BY clause of the OVER construct.
- order_by: a column element or string, or a list of such, that will be used as the ORDER BY clause of the OVER construct.

This function is also available from the func construct itself via the FunctionElement.over() method.

New in 0.7.

sqlalchemy.sql.expression.select(columns=None, whereclause=None, from_obj=[], **kwargs)
Returns a SELECT clause element.
Similar functionality is also available via the select() method on any FromClause.

The returned object is an instance of Select.

All arguments which accept ClauseElement arguments also accept string arguments, which will be converted as appropriate into either text() or literal_column() constructs.

See also: Selecting - Core Tutorial description of select().

Parameters
- columns: A list of ClauseElement objects, typically ColumnElement objects or subclasses, which will form the columns clause of the resulting statement. For all members which are instances of Selectable, the individual ColumnElement members of the Selectable will be added individually to the columns clause. For example, specifying a Table instance will result in all the contained Column objects within to be added to the columns clause. This argument is not present on the form of select() available on Table.
- whereclause: A ClauseElement expression which will be used to form the WHERE clause.
- from_obj: A list of ClauseElement objects which will be added to the FROM clause of the resulting statement. Note that "from" objects are automatically located within the columns and whereclause ClauseElements. Use this parameter to explicitly specify "from" objects which are not automatically locatable. This could include Table objects that aren't otherwise present, or Join objects whose presence will supersede that of the Table objects already located in the other clauses.
- autocommit: Deprecated. Use .execution_options(autocommit=<True|False>) to set the autocommit option.
- bind=None: an Engine or Connection instance to which the resulting Select object will be bound. The Select object will otherwise automatically bind to whatever Connectable instances can be located within its contained ClauseElement members.
- correlate=True: indicates that this Select object should have its contained FromClause elements "correlated" to an enclosing Select object. This means that any ClauseElement instance within the "froms" collection of this Select which is also present in the "froms" collection of an enclosing select will not be rendered in the FROM clause of this select statement.
- distinct=False: when True, applies a DISTINCT qualifier to the columns clause of the resulting statement.
            The boolean argument may also be a column expression or list of column expressions - this is a
            special calling form which is understood by the Postgresql dialect to render the
            DISTINCT ON (<columns>) syntax.

            distinct is also available via the distinct() generative method.

            Note: The distinct keyword's acceptance of a string argument for usage with MySQL is deprecated.
            Use the prefixes argument or prefix_with().
        for_update=False - when True, applies FOR UPDATE to the end of the resulting statement. Certain
            database dialects also support alternate values for this parameter, for example mysql supports
            "read" which translates to LOCK IN SHARE MODE, and oracle supports "nowait" which translates to
            FOR UPDATE NOWAIT.
        group_by - a list of ClauseElement objects which will comprise the GROUP BY clause of the resulting
            select.
        having - a ClauseElement that will comprise the HAVING clause of the resulting select when GROUP BY
            is used.
        limit=None - a numerical value which usually compiles to a LIMIT expression in the resulting select.
            Databases that don't support LIMIT will attempt to provide similar functionality.
        offset=None - a numeric value which usually compiles to an OFFSET expression in the resulting
            select. Databases that don't support OFFSET will attempt to provide similar functionality.
        order_by - a scalar or list of ClauseElement objects which will comprise the ORDER BY clause of the
            resulting select.
        prefixes - a list of strings or ClauseElement objects to include directly after the SELECT keyword
            in the generated statement, for dialect-specific query features.

            prefixes is also available via the prefix_with() generative method.
        use_labels=False - when True, the statement will be generated using labels for each column in the
            columns clause, which qualify each column with its parent table's (or aliases) name so that name
            conflicts between columns in different tables don't occur. The format of the label is
            <tablename>_<column>. The "c" collection of the resulting Select object will use these names as
            well for targeting column members.

            use_labels is also available via the apply_labels() generative method.
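    As an illustrative sketch, several of the keyword arguments above can be combined in a single
    select() call; the users table is hypothetical, and the same statement could equally be built
    with the generative where(), order_by() and limit() methods:

        from sqlalchemy import select

        # SELECT with WHERE, ORDER BY and LIMIT criteria, using
        # labeled columns of the form users_id, users_name
        stmt = select([users.c.id, users.c.name],
                      whereclause=users.c.name.like('j%'),
                      order_by=[users.c.name],
                      limit=10,
                      use_labels=True)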
sqlalchemy.sql.expression.subquery(alias, *args, **kwargs)
    Return an Alias object derived from a Select.

    name - alias name

    *args, **kwargs - all other arguments are delivered to the select() function.

sqlalchemy.sql.expression.table(name, *columns)
    Represent a textual table clause.

    The object returned is an instance of TableClause, which represents the "syntactical" portion of the
    schema-level Table object. It may be used to construct lightweight table constructs.

    Note that the table() function is not part of the sqlalchemy namespace. It must be imported from the
    sql package:

        from sqlalchemy.sql import table, column

    Parameters
        name - Name of the table.
        columns - A collection of column() constructs.

    See TableClause for further examples.

sqlalchemy.sql.expression.text(text, bind=None, *args, **kwargs)
    Create a SQL construct that is represented by a literal string.

    E.g.:

        t = text("SELECT * FROM users")
        result = connection.execute(t)

    The advantages text() provides over a plain string are backend-neutral support for bind parameters,
    per-statement execution options, as well as bind parameter and result-column typing behavior, allowing
    SQLAlchemy type constructs to play a role when executing a statement that is specified literally.

    Bind parameters are specified by name, using the format :name. E.g.:

        t = text("SELECT * FROM users WHERE id=:user_id")
        result = connection.execute(t, user_id=12)

    To invoke SQLAlchemy typing logic for bind parameters, the bindparams list allows specification of
    bindparam() constructs which specify the type for a given name:

        t = text("SELECT id FROM users WHERE updated_at>:updated",
                 bindparams=[bindparam('updated', DateTime())]
            )

    Typing during result row processing is also an important concern. Result column types are specified
    using the typemap dictionary, where the keys match the names of columns. These names are taken from what
    the DBAPI returns as cursor.description:

        t = text("SELECT id, name FROM users",
                 typemap={
                     'id': Integer,
                     'name': Unicode
                 }
            )

    The text() construct is used internally for most cases when a literal string is specified for part of a
    larger query, such as within select(), update(), insert() or delete(). In those cases, the same bind
    parameter syntax is applied:

        s = select([users.c.id, users.c.name]).where("id=:user_id")
        result = connection.execute(s, user_id=12)

    Using text() explicitly usually implies the construction of a full, standalone statement. As such,
    SQLAlchemy refers to it as an Executable object, and it supports the Executable.execution_options()
    method. For example, a text() construct that should be subject to "autocommit" can be set explicitly so
    using the autocommit option:
t = text("EXEC my_procedural_thing()").\ execution_options(autocommit=True) Note that SQLAlchemys usual autocommit behavior applies to text() constructs - that is, statements which begin with a phrase such as INSERT, UPDATE, DELETE, or a variety of other phrases specic to certain backends, will be eligible for autocommit if no transaction is in progress. Parameters text the text of the SQL statement to be created. use :<param> to specify bind parameters; they will be compiled to their engine-specic format. autocommit Deprecated. Use .execution_options(autocommit=<True|False>) to set the autocommit option. bind an optional connection or engine to be used for this text query. bindparams a list of bindparam() instances which can be used to dene the types and/or initial values for the bind parameters within the textual statement; the keynames of the bindparams must match those within the text of the statement. The types will be used for pre-processing on bind values. typemap a dictionary mapping the names of columns represented in the columns clause of a SELECT statement to type objects, which will be used to perform post-processing on columns within the result set. This argument applies to any expression that returns result sets. sqlalchemy.sql.expression.true() Return a _True object, which compiles to true, or the boolean equivalent for the target dialect. sqlalchemy.sql.expression.tuple_(*expr) Return a SQL tuple. Main usage is to produce a composite IN construct: tuple_(table.c.col1, table.c.col2).in_( [(1, 2), (5, 12), (10, 19)] ) sqlalchemy.sql.expression.type_coerce(expr, type_) Coerce the given expression into the given type, on the Python side only. type_coerce() is roughly similar to :func:.cast, except no CAST expression is rendered - the given type is only applied towards expression typing and against received result values. e.g.: from sqlalchemy.types import TypeDecorator import uuid class AsGuid(TypeDecorator): impl = String def process_bind_param(self, value, dialect): if value is not None: return str(value) else: return None
            def process_result_value(self, value, dialect):
                if value is not None:
                    return uuid.UUID(value)
                else:
                    return None

        conn.execute(
            select([type_coerce(mytable.c.ident, AsGuid)]).\
                where(
                    type_coerce(mytable.c.ident, AsGuid) ==
                    uuid.uuid3(uuid.NAMESPACE_URL, 'bar')
                )
        )

sqlalchemy.sql.expression.union(*selects, **kwargs)
    Return a UNION of multiple selectables.

    The returned object is an instance of CompoundSelect.

    A similar union() method is available on all FromClause subclasses.

    *selects - a list of Select instances.

    **kwargs - available keyword arguments are the same as those of select().

sqlalchemy.sql.expression.union_all(*selects, **kwargs)
    Return a UNION ALL of multiple selectables.

    The returned object is an instance of CompoundSelect.

    A similar union_all() method is available on all FromClause subclasses.

    *selects - a list of Select instances.

    **kwargs - available keyword arguments are the same as those of select().
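    A brief sketch of union() in use; the users table is a hypothetical construct assumed for the
    example:

        from sqlalchemy import select, union

        # UNION of two SELECTs against the same hypothetical table
        u = union(
            select([users.c.name]).where(users.c.id < 5),
            select([users.c.name]).where(users.c.id > 10)
        )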
sqlalchemy.sql.expression.update(table, whereclause=None, values=None, inline=False, **kwargs)
    Return an Update clause element.

    Similar functionality is available via the update() method on Table.

    Parameters
        table - The table to be updated.
        whereclause - A ClauseElement describing the WHERE condition of the UPDATE statement. Note that the
            where() generative method may also be used for this.
        values - A dictionary which specifies the SET conditions of the UPDATE, and is optional. If left as
            None, the SET conditions are determined from the bind parameters used during the compile phase
            of the UPDATE statement. If the bind parameters also are None during the compile phase, then the
            SET conditions will be generated from the full list of table columns. Note that the values()
            generative method may also be used for this.
        inline - if True, SQL defaults will be compiled "inline" into the statement and not pre-executed.

    If both values and compile-time bind parameters are present, the compile-time bind parameters override
    the information specified within values on a per-key basis.

    The keys within values can be either Column objects or their string identifiers. Each key may reference
    one of:

        a literal data value (i.e. string, number, etc.);
        a Column object;
        a SELECT statement.

    If a SELECT statement is specified which references this UPDATE statement's table, the statement will be
    correlated against the UPDATE statement.
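A minimal sketch of update() in use, including a correlated SELECT as a SET value as described above; the
users and addresses tables are hypothetical constructs assumed for the example:

    # plain UPDATE with WHERE criterion and a literal SET value
    stmt = users.update().\
        where(users.c.id == 5).\
        values(name='ed')

    # a scalar SELECT used as the SET value; it will be correlated
    # against the UPDATE statement's table
    stmt2 = users.update().values(
        fullname=select([addresses.c.email_address]).
            where(addresses.c.user_id == users.c.id).
            as_scalar()
    )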
3.2.2 Classes
class sqlalchemy.sql.expression.Alias(selectable, name=None)
    Bases: sqlalchemy.sql.expression.FromClause

    Represents a table or selectable alias (AS).

    Represents an alias, as typically applied to any table or sub-select within a SQL statement using the AS
    keyword (or without the keyword on certain databases such as Oracle).

    This object is constructed from the alias() module level function as well as the FromClause.alias()
    method available on all FromClause subclasses.

class sqlalchemy.sql.expression._BindParamClause(key, value, type_=None, unique=False, callable_=None, isoutparam=False, required=False, _compared_to_operator=None, _compared_to_type=None)
    Bases: sqlalchemy.sql.expression.ColumnElement

    Represent a bind parameter.

    Public constructor is the bindparam() function.

    __init__(key, value, type_=None, unique=False, callable_=None, isoutparam=False, required=False,
             _compared_to_operator=None, _compared_to_type=None)
        Construct a _BindParamClause.

        Parameters
            key - the key for this bind param. Will be used in the generated SQL statement for dialects that
                use named parameters. This value may be modified when part of a compilation operation, if
                other _BindParamClause objects exist with the same key, or if its length is too long and
                truncation is required.
            value - Initial value for this bind param. This value may be overridden by the dictionary of
                parameters sent to statement compilation/execution.
            callable_ - A callable function that takes the place of "value". The function will be called at
                statement execution time to determine the ultimate value. Used for scenarios where the
                actual bind value cannot be determined at the point at which the clause construct is
                created, but embedded bind values are still desirable.
            type_ - A TypeEngine object that will be used to pre-process the value corresponding to this
                _BindParamClause at execution time.
            unique - if True, the key name of this BindParamClause will be modified if another
                _BindParamClause of the same name already has been located within the containing
                ClauseElement.
            required - a value is required at execution time.
            isoutparam - if True, the parameter should be treated like a stored procedure "OUT" parameter.
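    A short sketch of bindparam() producing _BindParamClause objects whose values are deferred to
    execution time; the users table and the connection object conn are hypothetical:

        from sqlalchemy import bindparam

        # the actual values for 'oldname' and 'newname' are supplied
        # when the statement is executed
        stmt = users.update().\
            where(users.c.name == bindparam('oldname')).\
            values(name=bindparam('newname'))

        conn.execute(stmt, oldname='jack', newname='ed')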
    compare(other, **kw)
        Compare this _BindParamClause to the given clause.

class sqlalchemy.sql.expression.ClauseElement
    Bases: sqlalchemy.sql.visitors.Visitable

    Base class for elements of a programmatically constructed SQL expression.

    compare(other, **kw)
        Compare this ClauseElement to the given ClauseElement.

        Subclasses should override the default behavior, which is a straight identity comparison.

        **kw are arguments consumed by subclass compare() methods and may be used to modify the criteria for
        comparison. (see ColumnElement)

    compile(bind=None, dialect=None, **kw)
        Compile this SQL expression.

        The return value is a Compiled object. Calling str() or unicode() on the returned value will yield a
        string representation of the result. The Compiled object also can return a dictionary of bind
        parameter names and values using the params accessor.

        Parameters
            bind - An Engine or Connection from which a Compiled will be acquired. This argument takes
                precedence over this ClauseElement's bound engine, if any.
            column_keys - Used for INSERT and UPDATE statements, a list of column names which should be
                present in the VALUES clause of the compiled statement. If None, all columns from the target
                table object are rendered.
            dialect - A Dialect instance from which a Compiled will be acquired. This argument takes
                precedence over the bind argument as well as this ClauseElement's bound engine, if any.
            inline - Used for INSERT statements, for a dialect which does not support inline retrieval of
                newly generated primary key columns, will force the expression used to create the new
                primary key value to be rendered inline within the INSERT statement's VALUES clause. This
                typically refers to Sequence execution but may also refer to any server-side default
                generation function associated with a primary key Column.

    execute(*multiparams, **params)
        Compile and execute this ClauseElement.

        Deprecated since version 0.7: Only SQL expressions which subclass Executable may provide the
        execute() method.

    get_children(**kwargs)
        Return immediate child elements of this ClauseElement.

        This is used for visit traversal.

        **kwargs may contain flags that change the collection that is returned, for example to return a
        subset of items in order to cut down on larger traversals, or to return child items from a different
        context (such as schema-level collections instead of clause-level).

    params(*optionaldict, **kwargs)
        Return a copy with bindparam() elements replaced.

        Returns a copy of this ClauseElement with bindparam() elements replaced with values taken from the
        given dictionary:

            >>> clause = column('x') + bindparam('foo')
            >>> print clause.compile().params
            {'foo': None}
            >>> print clause.params({'foo': 7}).compile().params
            {'foo': 7}

    scalar(*multiparams, **params)
        Compile and execute this ClauseElement, returning the result's scalar representation.

        Deprecated since version 0.7: Only SQL expressions which subclass Executable may provide the
        scalar() method.

    self_group(against=None)
        Apply a "grouping" to this ClauseElement.

        This method is overridden by subclasses to return a "grouping" construct, i.e. parenthesis. In
        particular it's used by "binary" expressions to provide a grouping around themselves when placed
        into a larger expression, as well as by select() constructs when placed into the FROM clause of
        another select(). (Note that subqueries should be normally created using the Select.alias() method,
        as many platforms require nested SELECT statements to be named).

        As expressions are composed together, the application of self_group() is automatic - end-user code
        should never need to use this method directly. Note that SQLAlchemy's clause constructs take
        operator precedence into account - so parenthesis might not be needed, for example, in an expression
        like x OR (y AND z) - AND takes precedence over OR.

        The base self_group() method of ClauseElement just returns self.

    unique_params(*optionaldict, **kwargs)
        Return a copy with bindparam() elements replaced.

        Same functionality as params(), except adds unique=True to affected bind parameters so that multiple
        statements can be used.

class sqlalchemy.sql.expression.ClauseList(*clauses, **kwargs)
    Bases: sqlalchemy.sql.expression.ClauseElement

    Describe a list of clauses, separated by an operator.

    By default, is comma-separated, such as a column listing.

    compare(other, **kw)
        Compare this ClauseList to the given ClauseList, including a comparison of all the clause items.

class sqlalchemy.sql.expression.ColumnClause(text, selectable=None, type_=None, is_literal=False)
    Bases: sqlalchemy.sql.expression._Immutable, sqlalchemy.sql.expression.ColumnElement

    Represents a generic column expression from any textual string.

    This includes columns associated with tables, aliases and select statements, but also any arbitrary
    text. May or may not be bound to an underlying Selectable.

    ColumnClause is constructed by itself typically via the column() function. It may be placed directly
    into constructs such as select() constructs:

        from sqlalchemy.sql import column, select

        c1, c2 = column("c1"), column("c2")
        s = select([c1, c2]).where(c1==5)

    There is also a variant on column() known as literal_column() - the difference is that in the latter
    case, the string value is assumed to be an exact expression, rather than a column name, so that no
    quoting rules or similar are applied:
        from sqlalchemy.sql import literal_column, select

        s = select([literal_column("5 + 7")])

    ColumnClause can also be used in a table-like fashion by combining the column() function with the
    table() function, to produce a "lightweight" form of table metadata:

        from sqlalchemy.sql import table, column

        user = table("user",
            column("id"),
            column("name"),
            column("description"),
        )

    The above construct can be created in an ad-hoc fashion and is not associated with any schema.MetaData,
    unlike its more full-fledged schema.Table counterpart.

    Parameters
        text - the text of the element.
        selectable - parent selectable.
        type - types.TypeEngine object which can associate this ColumnClause with a type.
        is_literal - if True, the ColumnClause is assumed to be an exact expression that will be delivered
            to the output with no quoting rules applied regardless of case sensitive settings. The
            literal_column() function is usually used to create such a ColumnClause.

class sqlalchemy.sql.expression.ColumnCollection(*cols)
    Bases: sqlalchemy.util._collections.OrderedProperties

    An ordered dictionary that stores a list of ColumnElement instances.

    Overrides the __eq__() method to produce SQL clauses between sets of correlated columns.

    add(column)
        Add a column to this collection.

        The key attribute of the column will be used as the hash key for this dictionary.

    replace(column)
        add the given column to this collection, removing unaliased versions of this column as well as
        existing columns with the same key.

        e.g.:

            t = Table('sometable', metadata, Column('col1', Integer))
            t.columns.replace(Column('col1', Integer, key='columnone'))

        will remove the original 'col1' from the collection, and add the new column under the name
        'columnone'.

        Used by schema.Column to override columns during table reflection.

class sqlalchemy.sql.expression.ColumnElement
    Bases: sqlalchemy.sql.expression.ClauseElement, sqlalchemy.sql.expression._CompareMixin

    Represent an element that is usable within the "column clause" portion of a SELECT statement.
    This includes columns associated with tables, aliases, and subqueries, expressions, function calls, SQL
    keywords such as NULL, literals, etc. ColumnElement is the ultimate base class for all such elements.

    ColumnElement supports the ability to be a proxy element, which indicates that the ColumnElement may be
    associated with a Selectable which was derived from another Selectable. An example of a "derived"
    Selectable is an Alias of a Table.

    A ColumnElement, by subclassing the _CompareMixin mixin class, provides the ability to generate new
    ClauseElement objects using Python expressions. See the _CompareMixin docstring for more details.

    anon_label
        provides a constant 'anonymous label' for this ColumnElement.

        This is a label() expression which will be named at compile time. The same label() is returned each
        time anon_label is called so that expressions can reference anon_label multiple times, producing the
        same label name at compile time.

        The compiler uses this function automatically at compile time for expressions that are known to be
        'unnamed' like binary expressions and function calls.

    compare(other, use_proxies=False, equivalents=None, **kw)
        Compare this ColumnElement to another.

        Special arguments understood:

        Parameters
            use_proxies - when True, consider two columns that share a common base column as equivalent
                (i.e. shares_lineage())
            equivalents - a dictionary of columns as keys mapped to sets of columns. If the given "other"
                column is present in this dictionary, if any of the columns in the corresponding set() pass
                the comparison test, the result is True. This is used to expand the comparison to other
                columns that may be known to be equivalent to this one via foreign key or other criterion.

    shares_lineage(othercolumn)
        Return True if the given ColumnElement has a common ancestor to this ColumnElement.

class sqlalchemy.sql.expression._CompareMixin
    Bases: sqlalchemy.sql.operators.ColumnOperators

    Defines comparison and math operations for ClauseElement instances.

    See ColumnOperators and Operators for descriptions of all operations.

    asc()
        See ColumnOperators.asc().

    between(cleft, cright)
        See ColumnOperators.between().

    collate(collation)
        See ColumnOperators.collate().

    contains(other, escape=None)
        See ColumnOperators.contains().

    desc()
        See ColumnOperators.desc().

    distinct()
        See ColumnOperators.distinct().
    endswith(other, escape=None)
        See ColumnOperators.endswith().

    in_(other)
        See ColumnOperators.in_().

    label(name)
        Produce a column label, i.e. <columnname> AS <name>.

        This is a shortcut to the label() function.

        If 'name' is None, an anonymous label name will be generated.

    match(other)
        See ColumnOperators.match().

    nullsfirst()
        See ColumnOperators.nullsfirst().

    nullslast()
        See ColumnOperators.nullslast().

    op(operator)
        See ColumnOperators.op().

    startswith(other, escape=None)
        See ColumnOperators.startswith().

class sqlalchemy.sql.operators.ColumnOperators
    Bases: sqlalchemy.sql.operators.Operators

    Defines comparison and math operations.

    By default all methods call down to Operators.operate() or Operators.reverse_operate(), passing in the
    appropriate operator function from the Python builtin operator module or a SQLAlchemy-specific operator
    function from sqlalchemy.expression.operators. For example the __eq__ function:

        def __eq__(self, other):
            return self.operate(operators.eq, other)

    Where operators.eq is essentially:

        def eq(a, b):
            return a == b

    A SQLAlchemy construct like ColumnElement ultimately overrides Operators.operate() and others to return
    further ClauseElement constructs, so that the == operation above is replaced by a clause construct.

    The docstrings here will describe column-oriented behavior of each operator. For ORM-based operators on
    related objects and collections, see RelationshipProperty.Comparator.

    __eq__(other)
        Implement the == operator.

        In a column context, produces the clause a = b. If the target is None, produces a IS NULL.

    __ne__(other)
        Implement the != operator.

        In a column context, produces the clause a != b. If the target is None, produces a IS NOT NULL.
    __gt__(other)
        Implement the > operator.

        In a column context, produces the clause a > b.

    __ge__(other)
        Implement the >= operator.

        In a column context, produces the clause a >= b.

    __lt__(other)
        Implement the < operator.

        In a column context, produces the clause a < b.

    __le__(other)
        Implement the <= operator.

        In a column context, produces the clause a <= b.

    __neg__()
        Implement the - operator.

        In a column context, produces the clause -a.

    __add__(other)
        Implement the + operator.

        In a column context, produces the clause a + b if the parent object has non-string affinity. If the
        parent object has a string affinity, produces the concatenation operator, a || b - see concat().

    __mul__(other)
        Implement the * operator.

        In a column context, produces the clause a * b.

    __div__(other)
        Implement the / operator.

        In a column context, produces the clause a / b.

    __truediv__(other)
        Implement the // operator.

        In a column context, produces the clause a / b.

    __sub__(other)
        Implement the - operator.

        In a column context, produces the clause a - b.

    __radd__(other)
        Implement the + operator in reverse.

        See __add__().

    __rsub__(other)
        Implement the - operator in reverse.

        See __sub__().

    __rtruediv__(other)
        Implement the // operator in reverse.

        See __truediv__().
    __rdiv__(other)
        Implement the / operator in reverse.

        See __div__().

    __rmul__(other)
        Implement the * operator in reverse.

        See __mul__().

    __mod__(other)
        Implement the % operator.

        In a column context, produces the clause a % b.

    __eq__(other)
        Implement the == operator.

        In a column context, produces the clause a = b. If the target is None, produces a IS NULL.

    __init__
        x.__init__(...) initializes x; see x.__class__.__doc__ for signature

    __le__(other)
        Implement the <= operator.

        In a column context, produces the clause a <= b.

    __lt__(other)
        Implement the < operator.

        In a column context, produces the clause a < b.

    __ne__(other)
        Implement the != operator.

        In a column context, produces the clause a != b. If the target is None, produces a IS NOT NULL.

    asc()
        Produce an asc() clause against the parent object.

    between(cleft, cright)
        Produce a between() clause against the parent object, given the lower and upper range.

    collate(collation)
        Produce a collate() clause against the parent object, given the collation string.

    concat(other)
        Implement the 'concat' operator.

        In a column context, produces the clause a || b, or uses the concat() operator on MySQL.

    contains(other, **kwargs)
        Implement the 'contains' operator.

        In a column context, produces the clause LIKE '%<other>%'

    desc()
        Produce a desc() clause against the parent object.

    distinct()
        Produce a distinct() clause against the parent object.

    endswith(other, **kwargs)
        Implement the 'endswith' operator.
        In a column context, produces the clause LIKE '%<other>'

    ilike(other, escape=None)
        Implement the ilike operator.

        In a column context, produces the clause a ILIKE other.

    in_(other)
        Implement the in operator.

        In a column context, produces the clause a IN other. "other" may be a tuple/list of column
        expressions, or a select() construct.

    like(other, escape=None)
        Implement the like operator.

        In a column context, produces the clause a LIKE other.

    match(other, **kwargs)
        Implements the 'match' operator.

        In a column context, this produces a MATCH clause, i.e. MATCH '<other>'. The allowed contents of
        other are database backend specific.

    nullsfirst()
        Produce a nullsfirst() clause against the parent object.

    nullslast()
        Produce a nullslast() clause against the parent object.

    op(opstring)
        produce a generic operator function.

        e.g.:

            somecolumn.op("*")(5)

        produces:

            somecolumn * 5

        Parameters
            operator - a string which will be output as the infix operator between this ClauseElement and
                the expression passed to the generated function.

        This function can also be used to make bitwise operators explicit. For example:

            somecolumn.op('&')(0xff)

        is a bitwise AND of the value in somecolumn.

    operate(op, *other, **kwargs)
        Operate on an argument.

        This is the lowest level of operation, raises NotImplementedError by default.

        Overriding this on a subclass can allow common behavior to be applied to all operations. For
        example, overriding ColumnOperators to apply func.lower() to the left and right side:

            class MyComparator(ColumnOperators):
                def operate(self, op, other):
                    return op(func.lower(self), func.lower(other))
        Parameters
            op - Operator callable.
            *other - the 'other' side of the operation. Will be a single scalar for most operations.
            **kwargs - modifiers. These may be passed by special operators such as
                ColumnOperators.contains().

    reverse_operate(op, other, **kwargs)
        Reverse operate on an argument.

        Usage is the same as operate().

    startswith(other, **kwargs)
        Implement the 'startswith' operator.

        In a column context, produces the clause LIKE '<other>%'

    timetuple
        Hack, allows datetime objects to be compared on the LHS.

class sqlalchemy.sql.expression.CompoundSelect(keyword, *selects, **kwargs)
    Bases: sqlalchemy.sql.expression._SelectBase

    Forms the basis of UNION, UNION ALL, and other SELECT-based set operations.

class sqlalchemy.sql.expression.Delete(table, whereclause, bind=None, returning=None, **kwargs)
    Bases: sqlalchemy.sql.expression.UpdateBase

    Represent a DELETE construct.

    The Delete object is created using the delete() function.

    where(whereclause)
        Add the given WHERE clause to a newly returned delete construct.

class sqlalchemy.sql.expression.Executable
    Bases: sqlalchemy.sql.expression._Generative

    Mark a ClauseElement as supporting execution.

    Executable is a superclass for all "statement" types of objects, including select(), delete(), update(),
    insert(), text().

    bind
        Returns the Engine or Connection to which this Executable is bound, or None if none found.

        This is a traversal which checks locally, then checks among the "from" clauses of associated objects
        until a bound engine or connection is found.

    execute(*multiparams, **params)
        Compile and execute this Executable.

    execution_options(**kw)
        Set non-SQL options for the statement which take effect during execution.

        Execution options can be set on a per-statement or per-Connection basis. Additionally, the Engine
        and ORM Query objects provide access to execution options which they in turn configure upon
        connections.

        The execution_options() method is generative. A new instance of this statement is returned that
        contains the options:
            statement = select([table.c.x, table.c.y])
            statement = statement.execution_options(autocommit=True)

        Note that only a subset of possible execution options can be applied to a statement - these include
        "autocommit" and "stream_results", but not "isolation_level" or "compiled_cache". See
        Connection.execution_options() for a full list of possible options.

        See also:

        Connection.execution_options()

        Query.execution_options()

    scalar(*multiparams, **params)
        Compile and execute this Executable, returning the result's scalar representation.

class sqlalchemy.sql.expression.FunctionElement(*clauses, **kwargs)
    Bases: sqlalchemy.sql.expression.Executable, sqlalchemy.sql.expression.ColumnElement,
    sqlalchemy.sql.expression.FromClause

    Base for SQL function-oriented constructs.

    __init__(*clauses, **kwargs)
        Construct a FunctionElement.

    clauses
        Return the underlying ClauseList which contains the arguments for this FunctionElement.

    columns
        Fulfill the 'columns' contract of ColumnElement.

        Returns a single-element list consisting of this object.

    execute()
        Execute this FunctionElement against an embedded 'bind'.

        This first calls select() to produce a SELECT construct.

        Note that FunctionElement can be passed to the Connectable.execute() method of Connection or Engine.

    over(partition_by=None, order_by=None)
        Produce an OVER clause against this function.

        Used against aggregate or so-called "window" functions, for database backends that support window
        functions.

        The expression:

            func.row_number().over(order_by='x')

        is shorthand for:

            from sqlalchemy import over
            over(func.row_number(), order_by='x')

        See over() for a full description.

        New in 0.7.
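        As a sketch of the over() method in a full statement, assuming a hypothetical employees
        table construct:

            from sqlalchemy import func, select

            # number the rows within each department, ordered by salary
            stmt = select([
                employees.c.name,
                func.row_number().over(
                    partition_by=employees.c.dept,
                    order_by=employees.c.salary)
            ])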
    scalar()
        Execute this FunctionElement against an embedded 'bind' and return a scalar value.

        This first calls select() to produce a SELECT construct.

        Note that FunctionElement can be passed to the Connectable.scalar() method of Connection or Engine.

    select()
        Produce a select() construct against this FunctionElement.

        This is shorthand for:

            s = select([function_element])

class sqlalchemy.sql.expression.Function(name, *clauses, **kw)
    Bases: sqlalchemy.sql.expression.FunctionElement

    Describe a named SQL function.

    See the superclass FunctionElement for a description of public methods.

    __init__(name, *clauses, **kw)
        Construct a Function.

        The func construct is normally used to construct new Function instances.

class sqlalchemy.sql.expression.FromClause
    Bases: sqlalchemy.sql.expression.Selectable

    Represent an element that can be used within the FROM clause of a SELECT statement.

    alias(name=None)
        return an alias of this FromClause.

        This is shorthand for calling:

            from sqlalchemy import alias
            a = alias(self, name=name)

        See alias() for details.

    c
        A synonym for the columns attribute: the collection of Column objects contained by this FromClause.

    columns
        Return the collection of Column objects contained by this FromClause.

    correspond_on_equivalents(column, equivalents)
        Return corresponding_column for the given column, or if None search for a match in the given
        dictionary.

    corresponding_column(column, require_embedded=False)
        Given a ColumnElement, return the exported ColumnElement object from this Selectable which
        corresponds to that original Column via a common ancestor column.

        Parameters
            column - the target ColumnElement to be matched
            require_embedded - only return corresponding columns for the given ColumnElement, if the given
                ColumnElement is actually present within a sub-element of this FromClause. Normally the
                column will match if it merely shares a common ancestor with one of the exported columns of
                this FromClause.

    count(whereclause=None, **params)
        return a SELECT COUNT generated against this FromClause.

    description
        a brief description of this FromClause.

        Used primarily for error message formatting.

    foreign_keys
        Return the collection of ForeignKey objects which this FromClause references.

    is_derived_from(fromclause)
        Return True if this FromClause is 'derived' from the given FromClause.

        An example would be an Alias of a Table is derived from that Table.

    join(right, onclause=None, isouter=False)
        return a join of this FromClause against another FromClause.

    outerjoin(right, onclause=None)
        return an outer join of this FromClause against another FromClause.

    primary_key
        Return the collection of Column objects which comprise the primary key of this FromClause.

    replace_selectable(old, alias)
        replace all occurrences of FromClause 'old' with the given Alias object, returning a copy of this
        FromClause.

    select(whereclause=None, **params)
        return a SELECT of this FromClause.

class sqlalchemy.sql.expression.Insert(table, values=None, inline=False, bind=None, prefixes=None, returning=None, **kwargs)
    Bases: sqlalchemy.sql.expression.ValuesBase

    Represent an INSERT construct.

    The Insert object is created using the insert() function.

    See also:

    Insert Expressions

    prefix_with(clause)
        Add a word or expression between INSERT and INTO. Generative.

        If multiple prefixes are supplied, they will be separated with spaces.

    values(*args, **kwargs)
        specify the VALUES clause for an INSERT statement, or the SET clause for an UPDATE.

        Parameters
            **kwargs - key value pairs representing the string key of a Column mapped to the value to be
                rendered into the VALUES or SET clause:

                users.insert().values(name="some name")

                users.update().where(users.c.id==5).values(name="some name")
            *args - A single dictionary can be sent as the first positional argument. This allows non-string
                based keys, such as Column objects, to be used:

                users.insert().values({users.c.name: "some name"})

                users.update().where(users.c.id==5).values({users.c.name: "some name"})

    returning(*cols)
        Add a RETURNING or equivalent clause to this statement.

        The given list of columns represent columns within the table that is the target of the INSERT,
        UPDATE, or DELETE. Each element can be any column expression. Table objects will be expanded into
        their individual columns.

        Upon compilation, a RETURNING clause, or database equivalent, will be rendered within the statement.
        For INSERT and UPDATE, the values are the newly inserted/updated values. For DELETE, the values are
        those of the rows which were deleted.

        Upon execution, the values of the columns to be returned are made available via the result set and
        can be iterated using fetchone() and similar. For DBAPIs which do not natively support returning
        values (i.e. cx_oracle), SQLAlchemy will approximate this behavior at the result level so that a
        reasonable amount of behavioral neutrality is provided.

        Note that not all databases/DBAPIs support RETURNING. For those backends with no support, an
        exception is raised upon compilation and/or execution. For those who do support it, the
        functionality across backends varies greatly, including restrictions on executemany() and other
        statements which return multiple rows. Please read the documentation notes for the database in use
        in order to determine the availability of RETURNING.

class sqlalchemy.sql.expression.Join(left, right, onclause=None, isouter=False)
    Bases: sqlalchemy.sql.expression.FromClause

    represent a JOIN construct between two FromClause elements.

    The public constructor function for Join is the module-level join() function, as well as the join()
    method available off all FromClause subclasses.

    __init__(left, right, onclause=None, isouter=False)
        Construct a new Join.

        The usual entrypoint here is the join() function or the FromClause.join() method of any FromClause
        object.

    alias(name=None)
        return an alias of this Join.

        Used against a Join object, alias() calls the select() method first so that a subquery against a
        select() construct is generated. The select() construct also has the correlate flag set to False and
        will not auto-correlate inside an enclosing select() construct.

        The equivalent long-hand form, given a Join object j, is:

            from sqlalchemy import select, alias
            j = alias(
                select([j.left, j.right]).\
                    select_from(j).\
                    with_labels(True).\
                    correlate(False),
                name=name
            )

        See alias() for further details on aliases.
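    A brief sketch of constructing a Join and selecting from it; the users and addresses tables are
    hypothetical constructs assumed for the example:

        from sqlalchemy import select

        # JOIN with an explicit ON clause, rendered via select_from()
        j = users.join(addresses, users.c.id == addresses.c.user_id)
        stmt = select([users.c.name, addresses.c.email_address]).\
            select_from(j)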
    select(whereclause=None, fold_equivalents=False, **kwargs)
        Create a Select from this Join.

        The equivalent long-hand form, given a Join object j, is:

            from sqlalchemy import select
            j = select([j.left, j.right], **kw).\
                where(whereclause).\
                select_from(j)

        Parameters
            whereclause - the WHERE criterion that will be sent to the select() function
            fold_equivalents - based on the join criterion of this Join, do not include repeat column names
                in the column list of the resulting select, for columns that are calculated to be
                "equivalent" based on the join criterion of this Join. This will recursively apply to any
                joins directly nested by this one as well.
            **kwargs - all other kwargs are sent to the underlying select() function.

class sqlalchemy.sql.expression.Operators
    Base of comparison and logical operators.

    Implements base methods operate() and reverse_operate(), as well as __and__(), __or__(), __invert__().

    Usually is used via its most common subclass ColumnOperators.

    __and__(other)
        Implement the & operator.

        When used with SQL expressions, results in an AND operation, equivalent to and_(), that is:

            a & b

        is equivalent to:

            from sqlalchemy import and_
            and_(a, b)

        Care should be taken when using & regarding operator precedence; the & operator has the highest
        precedence. The operands should be enclosed in parentheses if they contain further sub expressions:

            (a == 2) & (b == 4)

    __or__(other)
        Implement the | operator.

        When used with SQL expressions, results in an OR operation, equivalent to or_(), that is:

            a | b

        is equivalent to:

            from sqlalchemy import or_
            or_(a, b)
        Care should be taken when using | regarding operator precedence; the | operator has the highest
        precedence. The operands should be enclosed in parentheses if they contain further sub expressions:

            (a == 2) | (b == 4)

    __invert__()
        Implement the ~ operator.

        When used with SQL expressions, results in a NOT operation, equivalent to not_(), that is:

            ~a

        is equivalent to:

            from sqlalchemy import not_
            not_(a)

    op(opstring)
        produce a generic operator function.

        e.g.:

            somecolumn.op("*")(5)

        produces:

            somecolumn * 5

        Parameters
            operator - a string which will be output as the infix operator between this ClauseElement and
                the expression passed to the generated function.

        This function can also be used to make bitwise operators explicit. For example:

            somecolumn.op('&')(0xff)

        is a bitwise AND of the value in somecolumn.

    operate(op, *other, **kwargs)
        Operate on an argument.

        This is the lowest level of operation, raises NotImplementedError by default.

        Overriding this on a subclass can allow common behavior to be applied to all operations. For
        example, overriding ColumnOperators to apply func.lower() to the left and right side:

            class MyComparator(ColumnOperators):
                def operate(self, op, other):
                    return op(func.lower(self), func.lower(other))

        Parameters
            op - Operator callable.
            *other - the 'other' side of the operation. Will be a single scalar for most operations.
            **kwargs - modifiers. These may be passed by special operators such as
                ColumnOperators.contains().
    reverse_operate(op, other, **kwargs)
        Reverse operate on an argument.

        Usage is the same as operate().

class sqlalchemy.sql.expression.Select(columns, whereclause=None, from_obj=None, distinct=False, having=None, correlate=True, prefixes=None, **kwargs)
    Bases: sqlalchemy.sql.expression._SelectBase

    Represents a SELECT statement.

    See also:

    select() - the function which creates a Select object.

    Selecting - Core Tutorial description of select().

    __init__(columns, whereclause=None, from_obj=None, distinct=False, having=None, correlate=True,
             prefixes=None, **kwargs)
        Construct a Select object.

        The public constructor for Select is the select() function; see that function for argument
        descriptions.

        Additional generative and mutator methods are available on the _SelectBase superclass.

    append_column(column)
        append the given column expression to the columns clause of this select() construct.

    append_correlation(fromclause)
        append the given correlation expression to this select() construct.

    append_from(fromclause)
        append the given FromClause expression to this select() construct's FROM clause.

    append_having(having)
        append the given expression to this select() construct's HAVING criterion.

        The expression will be joined to existing HAVING criterion via AND.

    append_prefix(clause)
        append the given columns clause prefix expression to this select() construct.

    append_whereclause(whereclause)
        append the given expression to this select() construct's WHERE criterion.

        The expression will be joined to existing WHERE criterion via AND.

    column(column)
        return a new select() construct with the given column expression added to its columns clause.

    correlate(*fromclauses)
        return a new select() construct which will correlate the given FROM clauses to that of an enclosing
        select(), if a match is found.

        By "match", the given fromclause must be present in this select's list of FROM objects and also
        present in an enclosing select's list of FROM objects.

        Calling this method turns off the select's default behavior of "auto-correlation". Normally, select()
        auto-correlates all of its FROM clauses to those of an embedded select when compiled.

        If the fromclause is None, correlation is disabled for the returned select().
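        A sketch of explicit correlation, assuming hypothetical users and addresses tables;
        auto-correlation would normally produce the same result here, so correlate() mainly matters
        when that default needs to be overridden:

            from sqlalchemy import func, select

            # the inner SELECT correlates users to the enclosing statement,
            # so only addresses is rendered in its FROM clause
            inner = select([func.count(addresses.c.id)]).\
                where(addresses.c.user_id == users.c.id).\
                correlate(users)

            stmt = select([users.c.name, inner.as_scalar()])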
    distinct(*expr)
        Return a new select() construct which will apply DISTINCT to its columns clause.

        Parameters
            *expr - optional column expressions. When present, the Postgresql dialect will render a
                DISTINCT ON (<expressions>) construct.

    except_(other, **kwargs)
        return a SQL EXCEPT of this select() construct against the given selectable.

    except_all(other, **kwargs)
        return a SQL EXCEPT ALL of this select() construct against the given selectable.

    froms
        Return the displayed list of FromClause elements.

    get_children(column_collections=True, **kwargs)
        return child elements as per the ClauseElement specification.

    having(having)
        return a new select() construct with the given expression added to its HAVING clause, joined to the
        existing clause via AND, if any.

    inner_columns
        an iterator of all ColumnElement expressions which would be rendered into the columns clause of the
        resulting SELECT statement.

    intersect(other, **kwargs)
        return a SQL INTERSECT of this select() construct against the given selectable.

    intersect_all(other, **kwargs)
        return a SQL INTERSECT ALL of this select() construct against the given selectable.

    locate_all_froms
        return a Set of all FromClause elements referenced by this Select.

        This set is a superset of that returned by the froms property, which is specifically for those
        FromClause elements that would actually be rendered.

    prefix_with(*expr)
        return a new select() construct which will apply the given expressions, typically strings, to the
        start of its columns clause, not using any commas. In particular is useful for MySQL keywords.

        e.g.:

            select([a, b]).prefix_with('HIGH_PRIORITY',
                                       'SQL_SMALL_RESULT',
                                       'ALL')

        Would render:

            SELECT HIGH_PRIORITY SQL_SMALL_RESULT ALL a, b

    select_from(fromclause)
        return a new Select construct with the given FROM expression merged into its list of FROM objects.

        The "from" list is a unique set on the identity of each element, so adding an already present Table
        or other selectable will have no effect. Passing a Join that refers to an already present Table or
        other selectable will have the effect of concealing the presence of that selectable as an individual
        element in the rendered FROM list, instead rendering it into a JOIN clause.

    self_group(against=None)
        return a 'grouping' construct as per the ClauseElement specification.

        This produces an element that can be embedded in an expression.
        Note that this method is called automatically as needed when constructing expressions.

    union(other, **kwargs)
        return a SQL UNION of this select() construct against the given selectable.

    union_all(other, **kwargs)
        return a SQL UNION ALL of this select() construct against the given selectable.

    where(whereclause)
        return a new select() construct with the given expression added to its WHERE clause, joined to the
        existing clause via AND, if any.

    with_hint(selectable, text, dialect_name='*')
        Add an indexing hint for the given selectable to this Select.

        The text of the hint is rendered in the appropriate location for the database backend in use,
        relative to the given Table or Alias passed as the selectable argument. The dialect implementation
        typically uses Python string substitution syntax with the token %(name)s to render the name of the
        table or alias. E.g. when using Oracle, the following:

            select([mytable]).\
                with_hint(mytable, "+ index(%(name)s ix_mytable)")

        Would render SQL as:

            select /*+ index(mytable ix_mytable) */ ... from mytable

        The dialect_name option will limit the rendering of a particular hint to a particular backend. Such
        as, to add hints for both Oracle and Sybase simultaneously:

            select([mytable]).\
                with_hint(mytable, "+ index(%(name)s ix_mytable)", 'oracle').\
                with_hint(mytable, "WITH INDEX ix_mytable", 'sybase')

    with_only_columns(columns)
        return a new select() construct with its columns clause replaced with the given columns.

class sqlalchemy.sql.expression.Selectable
    Bases: sqlalchemy.sql.expression.ClauseElement

    mark a class as being selectable

class sqlalchemy.sql.expression._SelectBase(use_labels=False, for_update=False, limit=None, offset=None, order_by=None, group_by=None, bind=None, autocommit=None)
    Bases: sqlalchemy.sql.expression.Executable, sqlalchemy.sql.expression.FromClause

    Base class for Select and CompoundSelects.

    append_group_by(*clauses)
        Append the given GROUP BY criterion applied to this selectable.

        The criterion will be appended to any pre-existing GROUP BY criterion.

    append_order_by(*clauses)
        Append the given ORDER BY criterion applied to this selectable.

        The criterion will be appended to any pre-existing ORDER BY criterion.
    apply_labels()
        return a new selectable with the 'use_labels' flag set to True.

        This will result in column expressions being generated using labels against their table name, such
        as "SELECT somecolumn AS tablename_somecolumn". This allows selectables which contain multiple FROM
        clauses to produce a unique set of column names regardless of name conflicts among the individual
        FROM clauses.

    as_scalar()
        return a 'scalar' representation of this selectable, which can be used as a column expression.

        Typically, a select statement which has only one column in its columns clause is eligible to be used
        as a scalar expression.

        The returned object is an instance of _ScalarSelect.

    autocommit()
        return a new selectable with the 'autocommit' flag set to True.

        Deprecated since version 0.6: autocommit() is deprecated. Use Executable.execution_options() with
        the 'autocommit' flag.

    group_by(*clauses)
        return a new selectable with the given list of GROUP BY criterion applied.

        The criterion will be appended to any pre-existing GROUP BY criterion.

    label(name)
        return a 'scalar' representation of this selectable, embedded as a subquery with a label.

        See also as_scalar().

    limit(limit)
        return a new selectable with the given LIMIT criterion applied.

    offset(offset)
        return a new selectable with the given OFFSET criterion applied.

    order_by(*clauses)
        return a new selectable with the given list of ORDER BY criterion applied.

        The criterion will be appended to any pre-existing ORDER BY criterion.
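    An illustrative sketch of as_scalar(), using a hypothetical users table; label() would produce
    the same embedded subquery with a name attached:

        from sqlalchemy import func, select

        # a one-column SELECT used as a scalar expression in a WHERE clause
        avg_q = select([func.avg(users.c.id)]).as_scalar()
        stmt = select([users.c.name]).where(users.c.id > avg_q)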
class sqlalchemy.sql.expression.TableClause(name, *columns)
    Bases: sqlalchemy.sql.expression._Immutable, sqlalchemy.sql.expression.FromClause

    Represents a minimal "table" construct.

    The constructor for TableClause is the table() function. This produces a lightweight table object that
    has only a name and a collection of columns, which are typically produced by the column() function:

        from sqlalchemy.sql import table, column

        user = table("user",
            column("id"),
            column("name"),
            column("description"),
        )

    The TableClause construct serves as the base for the more commonly used Table object, providing the
    usual set of FromClause services including the .c. collection and statement generation methods.

    It does not provide all the additional schema-level services of Table, including constraints, references
    to other tables, or support for MetaData-level services. It's useful on its own as an ad-hoc construct
    used to generate quick SQL statements when a more fully fledged Table is not on hand.

    count(whereclause=None, **params)
        return a SELECT COUNT generated against this TableClause.

    delete(whereclause=None, **kwargs)
        Generate a delete() construct.

    insert(values=None, inline=False, **kwargs)
        Generate an insert() construct.

    update(whereclause=None, values=None, inline=False, **kwargs)
        Generate an update() construct.

class sqlalchemy.sql.expression.Update(table, whereclause, values=None, inline=False, bind=None, returning=None, **kwargs)
    Bases: sqlalchemy.sql.expression.ValuesBase

    Represent an Update construct.

    The Update object is created using the update() function.

    where(whereclause)
        return a new update() construct with the given expression added to its WHERE clause, joined to the
        existing clause via AND, if any.

    values(*args, **kwargs)
        specify the VALUES clause for an INSERT statement, or the SET clause for an UPDATE.

        Parameters
            **kwargs - key value pairs representing the string key of a Column mapped to the value to be
                rendered into the VALUES or SET clause:

                users.insert().values(name="some name")

                users.update().where(users.c.id==5).values(name="some name")

            *args - A single dictionary can be sent as the first positional argument. This allows non-string
                based keys, such as Column objects, to be used:

                users.insert().values({users.c.name: "some name"})

                users.update().where(users.c.id==5).values({users.c.name: "some name"})

class sqlalchemy.sql.expression.UpdateBase
    Bases: sqlalchemy.sql.expression.Executable, sqlalchemy.sql.expression.ClauseElement

    Form the base for INSERT, UPDATE, and DELETE statements.

    params(*arg, **kw)
        Set the parameters for the statement.

        This method raises NotImplementedError on the base class, and is overridden by ValuesBase to provide
        the SET/VALUES clause of UPDATE and INSERT.

    bind
        Return a 'bind' linked to this UpdateBase or a Table associated with it.
    returning(*cols)
        Add a RETURNING or equivalent clause to this statement.

        The given list of columns represent columns within the table that is the target of the INSERT,
        UPDATE, or DELETE. Each element can be any column expression. Table objects will be expanded into
        their individual columns.

        Upon compilation, a RETURNING clause, or database equivalent, will be rendered within the statement.
        For INSERT and UPDATE, the values are the newly inserted/updated values. For DELETE, the values are
        those of the rows which were deleted.

        Upon execution, the values of the columns to be returned are made available via the result set and
        can be iterated using fetchone() and similar. For DBAPIs which do not natively support returning
        values (i.e. cx_oracle), SQLAlchemy will approximate this behavior at the result level so that a
        reasonable amount of behavioral neutrality is provided.

        Note that not all databases/DBAPIs support RETURNING. For those backends with no support, an
        exception is raised upon compilation and/or execution. For those who do support it, the
        functionality across backends varies greatly, including restrictions on executemany() and other
        statements which return multiple rows. Please read the documentation notes for the database in use
        in order to determine the availability of RETURNING.

class sqlalchemy.sql.expression.ValuesBase(table, values)
    Bases: sqlalchemy.sql.expression.UpdateBase

    Supplies support for ValuesBase.values() to INSERT and UPDATE constructs.

    values(*args, **kwargs)
        specify the VALUES clause for an INSERT statement, or the SET clause for an UPDATE.

        Parameters
            **kwargs - key value pairs representing the string key of a Column mapped to the value to be
                rendered into the VALUES or SET clause:

                users.insert().values(name="some name")

                users.update().where(users.c.id==5).values(name="some name")

            *args - A single dictionary can be sent as the first positional argument. This allows non-string
                based keys, such as Column objects, to be used:

                users.insert().values({users.c.name: "some name"})

                users.update().where(users.c.id==5).values({users.c.name: "some name"})
class sqlalchemy.sql.functions.GenericFunction(type_=None, args=(), **kwargs)
    Bases: sqlalchemy.sql.expression.Function

class sqlalchemy.sql.functions.ReturnTypeFromArgs(*args, **kwargs)
    Bases: sqlalchemy.sql.functions.GenericFunction

    Define a function whose return type is the same as its arguments.

class sqlalchemy.sql.functions.char_length(arg, **kwargs)
    Bases: sqlalchemy.sql.functions.GenericFunction

class sqlalchemy.sql.functions.coalesce(*args, **kwargs)
    Bases: sqlalchemy.sql.functions.ReturnTypeFromArgs

class sqlalchemy.sql.functions.concat(*args, **kwargs)
    Bases: sqlalchemy.sql.functions.GenericFunction

class sqlalchemy.sql.functions.count(expression=None, **kwargs)
    Bases: sqlalchemy.sql.functions.GenericFunction

    The ANSI COUNT aggregate function. With no arguments, emits COUNT *.

class sqlalchemy.sql.functions.current_date(**kwargs)
    Bases: sqlalchemy.sql.functions.AnsiFunction

class sqlalchemy.sql.functions.current_time(**kwargs)
    Bases: sqlalchemy.sql.functions.AnsiFunction

class sqlalchemy.sql.functions.current_timestamp(**kwargs)
    Bases: sqlalchemy.sql.functions.AnsiFunction

class sqlalchemy.sql.functions.current_user(**kwargs)
    Bases: sqlalchemy.sql.functions.AnsiFunction

class sqlalchemy.sql.functions.localtime(**kwargs)
    Bases: sqlalchemy.sql.functions.AnsiFunction

class sqlalchemy.sql.functions.localtimestamp(**kwargs)
    Bases: sqlalchemy.sql.functions.AnsiFunction

class sqlalchemy.sql.functions.max(*args, **kwargs)
    Bases: sqlalchemy.sql.functions.ReturnTypeFromArgs

class sqlalchemy.sql.functions.min(*args, **kwargs)
    Bases: sqlalchemy.sql.functions.ReturnTypeFromArgs

class sqlalchemy.sql.functions.next_value(seq, **kw)
    Bases: sqlalchemy.sql.expression.Function

    Represent the 'next value', given a Sequence as its single argument.

    Compiles into the appropriate function on each backend, or will raise NotImplementedError if used on a
    backend that does not provide support for sequences.

class sqlalchemy.sql.functions.now(type_=None, args=(), **kwargs)
    Bases: sqlalchemy.sql.functions.GenericFunction

class sqlalchemy.sql.functions.random(*args, **kwargs)
    Bases: sqlalchemy.sql.functions.GenericFunction

class sqlalchemy.sql.functions.session_user(**kwargs)
    Bases: sqlalchemy.sql.functions.AnsiFunction

class sqlalchemy.sql.functions.sum(*args, **kwargs)
    Bases: sqlalchemy.sql.functions.ReturnTypeFromArgs
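A few illustrative invocations of these generic functions via the func construct; the users table is a
hypothetical construct assumed for the example:

    from sqlalchemy import Sequence, func, select

    # COALESCE, with a return type derived from its arguments
    stmt = select([func.coalesce(users.c.fullname, users.c.name)])

    # with no arguments, count() emits COUNT(*)
    total = select([func.count()]).select_from(users)

    # next_value() takes a Sequence and compiles per-backend
    next_id = func.next_value(Sequence('user_id_seq'))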
Where above, an Engine references both a Dialect and a Pool, which together interpret the DBAPI's module
functions as well as the behavior of the database.

Creating an engine is just a matter of issuing a single call, create_engine():

    from sqlalchemy import create_engine
    engine = create_engine('postgresql://scott:tiger@localhost:5432/mydatabase')

The above engine creates a Dialect object tailored towards PostgreSQL, as well as a Pool object which will
establish a DBAPI connection at localhost:5432 when a connection request is first received. Note that the
Engine and its underlying Pool do not establish the first actual DBAPI connection until the
Engine.connect() method is called, or an operation which is dependent on this method such as
Engine.execute() is invoked. In this way, Engine and Pool can be said to have a lazy initialization
behavior.

The Engine, once created, can either be used directly to interact with the database, or can be passed to a
Session object to work with the ORM. This section covers the details of configuring an Engine. The next
section, Working with Engines and Connections, will detail the usage API of the Engine and similar,
typically for non-ORM applications.
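A minimal sketch of the lazy initialization behavior described above, reusing the hypothetical connection
URL from the example:

    engine = create_engine('postgresql://scott:tiger@localhost:5432/mydatabase')

    # no DBAPI connection has been made yet; the first one is
    # established by connect() (or implicitly by execute())
    conn = engine.connect()
    result = conn.execute("select 1")
    value = result.scalar()
    conn.close()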
no / OS platform - The DBAPI does not support that platform.
partial - the DBAPI is partially usable on the target platform but has major unresolved issues.
development - a development version of the dialect exists, but is not yet usable.
thirdparty - the dialect itself is maintained by a third party, who should be consulted for information on
    current support.
* - indicates the given DBAPI is the default for SQLAlchemy, i.e. when just the database name is specified.

Driver                      Connect string              Py2K         Py3K         Jython       Unix
DB2/Informix IDS
  ibm-db                    thirdparty                  thirdparty   thirdparty   thirdparty   thirdparty
Drizzle
  mysql-python              drizzle+mysqldb*            yes          development  no           yes
Firebird / Interbase
  kinterbasdb               firebird+kinterbasdb*       yes          development  no           yes
Informix
  informixdb                informix+informixdb*        yes          development  no           unknown
MaxDB
  sapdb                     maxdb+sapdb*                development  development  no           yes
Microsoft Access
  pyodbc                    access+pyodbc*              development  development  no           unknown
Microsoft SQL Server
  adodbapi                  mssql+adodbapi              development  development  no           no
  jTDS JDBC Driver          mssql+zxjdbc                no           no           development  yes
  mxodbc                    mssql+mxodbc                yes          development  no           yes with FreeTDS
  pyodbc                    mssql+pyodbc*               yes          development  no           yes with FreeTDS
  pymssql                   mssql+pymssql               yes          development  no           yes
MySQL
  MySQL Connector/J         mysql+zxjdbc                no           no           yes          yes
  MySQL Connector/Python    mysql+mysqlconnector        yes          yes          no           yes
  mysql-python              mysql+mysqldb*              yes          development  no           yes
  OurSQL                    mysql+oursql                yes          yes          no           yes
  pymysql                   mysql+pymysql               yes          development  no           yes
Oracle
  cx_oracle                 oracle+cx_oracle*           yes          development  no           yes
  Oracle JDBC Driver        oracle+zxjdbc               no           no           yes          yes
Postgresql
  pg8000                    postgresql+pg8000           yes          yes          no           yes
  PostgreSQL JDBC Driver    postgresql+zxjdbc           no           no           yes          yes
  psycopg2                  postgresql+psycopg2*        yes          yes          no           yes
  pypostgresql              postgresql+pypostgresql     no           yes          no           yes
SQLite
  pysqlite                  sqlite+pysqlite*            yes          yes          no           yes
  sqlite3                   sqlite+pysqlite*            yes          yes          no           yes
Sybase ASE
  mxodbc                    sybase+mxodbc               development  development  no           yes
  pyodbc                    sybase+pyodbc*              partial      development  no           unknown
  python-sybase             sybase+pysybase             yes          development  no           yes
be modified at any time to turn logging on and off. If set to the string "debug", result rows will be printed to the standard output as well. This flag ultimately controls a Python logger; see Configuring Logging for information on how to configure logging directly.

echo_pool=False
    if True, the connection pool will log all checkouts/checkins to the logging stream, which defaults to sys.stdout. This flag ultimately controls a Python logger; see Configuring Logging for information on how to configure logging directly.

encoding='utf-8'
    the encoding to use for all Unicode translations, both by engine-wide unicode conversion as well as the Unicode type object.

execution_options
    Dictionary of execution options which will be applied to all connections. See execution_options().

implicit_returning=True
    When True, a RETURNING-compatible construct, if available, will be used to fetch newly generated primary key values when a single row INSERT statement is emitted with no existing returning() clause. This applies to those backends which support RETURNING or a compatible construct, including PostgreSQL, Firebird, Oracle, Microsoft SQL Server. Set this to False to disable the automatic usage of RETURNING.

label_length=None
    optional integer value which limits the size of dynamically generated column labels to that many characters. If less than 6, labels are generated as "_(counter)". If None, the value of dialect.max_identifier_length is used instead.

listeners
    A list of one or more PoolListener objects which will receive connection pool events.

logging_name
    String identifier which will be used within the "name" field of logging records generated within the "sqlalchemy.engine" logger. Defaults to a hexstring of the object's id.

max_overflow=10
    the number of connections to allow in connection pool "overflow", that is connections that can be opened above and beyond the pool_size setting, which defaults to five. This is only used with QueuePool.

module=None
    reference to a Python module object (the module itself, not its string name). Specifies an alternate DBAPI module to be used by the engine's dialect. Each sub-dialect references a specific DBAPI which will be imported before first connect. This parameter causes the import to be bypassed, and the given module to be used instead. Can be used for testing of DBAPIs as well as to inject "mock" DBAPI implementations into the Engine.

pool=None
    an already-constructed instance of Pool, such as a QueuePool instance. If non-None, this pool will be used directly as the underlying connection pool for the engine, bypassing whatever connection parameters are present in the URL argument. For information on constructing connection pools manually, see Connection Pooling.

poolclass=None
    a Pool subclass, which will be used to create a connection pool instance using the connection parameters given in the URL. Note this differs from pool in that you don't actually instantiate the pool in this case, you just indicate what type of pool to be used.

pool_logging_name
    String identifier which will be used within the "name" field of logging records generated within the "sqlalchemy.pool" logger. Defaults to a hexstring of the object's id.

pool_size=5
    the number of connections to keep open inside the connection pool. This is used with QueuePool as well as SingletonThreadPool. With QueuePool, a pool_size setting of 0 indicates no limit; to disable pooling, set poolclass to NullPool instead.
pool_recycle=-1
    this setting causes the pool to recycle connections after the given number of seconds has passed. It defaults to -1, or no timeout. For example, setting to 3600 means connections will be recycled after one hour. Note that MySQL in particular will disconnect automatically if no activity is detected on a connection for eight hours (although this is configurable with the MySQLdb connection itself and the server configuration as well).

pool_timeout=30
    number of seconds to wait before giving up on getting a connection from the pool. This is only used with QueuePool.

strategy='plain'
    selects alternate engine implementations. Currently available is the 'threadlocal' strategy, which is described in Using the Threadlocal Execution Strategy.

sqlalchemy.engine_from_config(configuration, prefix='sqlalchemy.', **kwargs)
    Create a new Engine instance using a configuration dictionary. The dictionary is typically produced from a config file where keys are prefixed, such as sqlalchemy.url, sqlalchemy.echo, etc. The prefix argument indicates the prefix to be searched for. A select set of keyword arguments will be "coerced" to their expected type based on string values. In a future release, this functionality will be expanded and include dialect-specific arguments.
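For illustration, a sketch of engine_from_config() with a hypothetical configuration dictionary, such as one parsed from an .ini file; keys outside the given prefix are ignored:

    from sqlalchemy import engine_from_config

    config = {
        'sqlalchemy.url': 'postgresql://scott:tiger@localhost/test',
        'sqlalchemy.echo': 'true',
        'sqlalchemy.pool_recycle': '3600',
    }

    # string values such as 'true' and '3600' are coerced to the
    # expected types for the select set of known arguments
    engine = engine_from_config(config, prefix='sqlalchemy.')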
oracle_db = create_engine('oracle+cx_oracle://scott:tiger@tnsname')

# mssql using ODBC datasource names.  PyODBC is the default driver.
mssql_db = create_engine('mssql://mydsn')
mssql_db = create_engine('mssql+pyodbc://mydsn')
mssql_db = create_engine('mssql+adodbapi://mydsn')
mssql_db = create_engine('mssql+pyodbc://username:password@mydsn')

SQLite connects to file-based databases. The same URL format is used, omitting the hostname, and using the "file" portion as the filename of the database. This has the effect of four slashes being present for an absolute file path:

# sqlite://<nohostname>/<path>
# where <path> is relative:
sqlite_db = create_engine('sqlite:///foo.db')

# or absolute, starting with a slash:
sqlite_db = create_engine('sqlite:////absolute/path/to/foo.db')

To use a SQLite :memory: database, specify an empty URL:

sqlite_memory_db = create_engine('sqlite://')

The Engine will ask the connection pool for a connection when the connect() or execute() methods are called. The default connection pool, QueuePool, will open connections to the database on an as-needed basis. As concurrent statements are executed, QueuePool will grow its pool of connections to a default size of five, and will allow a default "overflow" of ten. Since the Engine is essentially "home base" for the connection pool, it follows that you should keep a single Engine per database established within an application, rather than creating a new one for each connection.

Note: QueuePool is not used by default for SQLite engines. See SQLite for details on SQLite connection pool usage.

class sqlalchemy.engine.url.URL(drivername, username=None, password=None, host=None, port=None, database=None, query=None)
    Represent the components of a URL used to connect to a database.
This object is suitable to be passed directly to a create_engine() call. The fields of the URL are parsed from a string by the module-level make_url() function. The string format of the URL is an RFC-1738-style string. All initialization parameters are available as public attributes.

    Parameters
        drivername - the name of the database backend. This name will correspond to a module in sqlalchemy/databases or a third party plug-in.
        username - The user name.
        password - database password.
        host - The name of the host.
        port - The port number.
        database - The database name.
        query - A dictionary of options to be passed to the dialect and/or the DBAPI upon connect.

    get_dialect()
        Return the SQLAlchemy database dialect class corresponding to this URL's driver name.
    translate_connect_args(names=[], **kw)
        Translate url attributes into a dictionary of connection arguments.
        Returns attributes of this url (host, database, username, password, port) as a plain dictionary. The attribute names are used as the keys by default. Unset or false attributes are omitted from the final dictionary.

        Parameters
            **kw - Optional, alternate key names for url attributes.
            names - Deprecated. Same purpose as the keyword-based alternate names, but correlates the name to the original positionally.
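For illustration, a brief sketch using the module-level make_url() function:

    from sqlalchemy.engine.url import make_url

    url = make_url('postgresql://scott:tiger@localhost:5432/test')

    # attribute names are used as the keys by default
    print url.translate_connect_args()
    # {'username': 'scott', 'password': 'tiger', 'host': 'localhost',
    #  'port': 5432, 'database': 'test'}

    # alternate key names, e.g. for a DBAPI that expects "user"
    print url.translate_connect_args(username='user')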
db = create_engine('postgresql://scott:tiger@localhost/test', connect_args={'argument1': 1})

The most customizable connection method of all is to pass a creator argument, which specifies a callable that returns a DBAPI connection:

def connect():
    return psycopg2.connect(user='scott', host='localhost')

db = create_engine('postgresql://', creator=connect)
sqlalchemy.dialects - controls custom logging for SQL dialects. See the documentation of individual dialects for details.

sqlalchemy.pool - controls connection pool logging. Set to logging.INFO or lower to log connection pool checkouts/checkins.

sqlalchemy.orm - controls logging of various ORM functions. Set to logging.INFO for information on mapper configurations.
For example, to log SQL queries using Python logging instead of the echo=True flag:

import logging
logging.basicConfig()
logging.getLogger('sqlalchemy.engine').setLevel(logging.INFO)

By default, the log level is set to logging.WARN within the entire sqlalchemy namespace so that no log operations occur, even within an application that has logging enabled otherwise.

The echo flags present as keyword arguments to create_engine() and others as well as the echo property on Engine, when set to True, will first attempt to ensure that logging is enabled. Unfortunately, the logging module provides no way of determining if output has already been configured (note we are referring to if a logging configuration has been set up, not just that the logging level is set). For this reason, any echo=True flags will result in a call to logging.basicConfig() using sys.stdout as the destination. It also sets up a default format using the level name, timestamp, and logger name. Note that this configuration has the effect of being configured in addition to any existing logger configurations. Therefore, when using Python logging, ensure all echo flags are set to False at all times, to avoid getting duplicate log lines.

The logger name of an instance such as an Engine or Pool defaults to using a truncated hex identifier string. To set this to a specific name, use the logging_name and pool_logging_name keyword arguments with sqlalchemy.create_engine().

Note: The SQLAlchemy Engine conserves Python function call overhead by only emitting log statements when the current logging level is detected as logging.INFO or logging.DEBUG. It only checks this level when a new connection is procured from the connection pool. Therefore when changing the logging configuration for an already-running application, any Connection that's currently active, or more commonly a Session object that's active in a transaction, won't log any SQL according to the new configuration until a new Connection is procured (in the case of Session, this is after the current transaction ends and a new one begins).
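For illustration, a sketch of the logging_name parameters described above, using hypothetical names in place of the default hex identifiers:

    import logging
    from sqlalchemy import create_engine

    logging.basicConfig()
    logging.getLogger('sqlalchemy.engine').setLevel(logging.INFO)
    logging.getLogger('sqlalchemy.pool').setLevel(logging.INFO)

    # "myengine" and "mypool" are hypothetical; they appear in the
    # "name" field of log records for this engine and its pool
    engine = create_engine('sqlite://',
                           logging_name='myengine',
                           pool_logging_name='mypool')
    engine.execute("select 1")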
The engine can be used directly to issue SQL to the database. The most generic way is to first procure a connection resource, which you get via the connect() method:

connection = engine.connect()
result = connection.execute("select username from users")
for row in result:
    print "username:", row['username']
connection.close()

The connection is an instance of Connection, which is a proxy object for an actual DBAPI connection. The DBAPI connection is retrieved from the connection pool at the point at which Connection is created.

The returned result is an instance of ResultProxy, which references a DBAPI cursor and provides a largely compatible interface with that of the DBAPI cursor. The DBAPI cursor will be closed by the ResultProxy when all of its result rows (if any) are exhausted. A ResultProxy that returns no rows, such as that of an UPDATE statement (without any returned rows), releases cursor resources immediately upon construction.

When the close() method is called, the referenced DBAPI connection is returned to the connection pool. From the perspective of the database itself, nothing is actually "closed", assuming pooling is in use. The pooling mechanism issues a rollback() call on the DBAPI connection so that any transactional state or locks are removed, and the connection is ready for its next usage.

The above procedure can be performed in a shorthand way by using the execute() method of Engine itself:

result = engine.execute("select username from users")
for row in result:
    print "username:", row['username']

Where above, the execute() method acquires a new Connection on its own, executes the statement with that object, and returns the ResultProxy. In this case, the ResultProxy contains a special flag known as close_with_result, which indicates that when its underlying DBAPI cursor is closed, the Connection object itself is also closed, which again returns the DBAPI connection to the connection pool, releasing transactional resources.

If the ResultProxy potentially has rows remaining, it can be instructed to close out its resources explicitly:

result.close()

If the ResultProxy has pending rows remaining and is dereferenced by the application without being closed, Python garbage collection will ultimately close out the cursor as well as trigger a return of the pooled DBAPI connection resource to the pool (SQLAlchemy achieves this by the usage of weakref callbacks - never the __del__ method), however it's never a good idea to rely upon Python garbage collection to manage resources.

Our example above illustrated the execution of a textual SQL string. The execute() method can of course accommodate more than that, including the variety of SQL expression constructs described in SQL Expression Language Tutorial.
    trans.commit()
except:
    trans.rollback()
    raise

Nesting of Transaction Blocks

The Transaction object also handles "nested" behavior by keeping track of the outermost begin/commit pair. In this example, two functions both issue a transaction on a Connection, but only the outermost Transaction object actually takes effect when it is committed.

# method_a starts a transaction and calls method_b
def method_a(connection):
    trans = connection.begin() # open a transaction
    try:
        method_b(connection)
        trans.commit()  # transaction is committed here
    except:
        trans.rollback() # this rolls back the transaction unconditionally
        raise
# method_b also starts a transaction
def method_b(connection):
    trans = connection.begin() # open a transaction - this runs in the context of method_a's transaction
    try:
        connection.execute("insert into mytable values ('bat', 'lala')")
        connection.execute(mytable.insert(), col1='bat', col2='lala')
        trans.commit()  # transaction is not committed yet
    except:
        trans.rollback() # this rolls back the transaction unconditionally
        raise

# open a Connection and call method_a
conn = engine.connect()
method_a(conn)
conn.close()

Above, method_a is called first, which calls connection.begin(). Then it calls method_b. When method_b calls connection.begin(), it just increments a counter that is decremented when it calls commit(). If either method_a or method_b calls rollback(), the whole transaction is rolled back. The transaction is not committed until method_a calls the commit() method. This "nesting" behavior allows the creation of functions which "guarantee" that a transaction will be used if one was not already available, but will automatically participate in an enclosing transaction if one exists.
UPDATE, DELETE, as well as data definition language (DDL) statements such as CREATE TABLE, ALTER TABLE, and then issuing a COMMIT automatically if no transaction is in progress. The detection is based on the presence of the autocommit=True execution option on the statement. If the statement is a text-only statement and the flag is not set, a regular expression is used to detect INSERT, UPDATE, DELETE, as well as a variety of other commands for a particular backend:

conn = engine.connect()
conn.execute("INSERT INTO users VALUES (1, 'john')")  # autocommits
The "autocommit" feature is only in effect when no Transaction has otherwise been declared. This means the feature is not generally used with the ORM, as the Session object by default always maintains an ongoing Transaction.

Full control of the "autocommit" behavior is available using the generative Connection.execution_options() method provided on Connection, Engine and Executable, using the autocommit flag which will turn on or off the autocommit for the selected scope. For example, a text() construct representing a stored procedure that commits might use it so that a SELECT statement will issue a COMMIT:

engine.execute(text("SELECT my_mutating_procedure()").execution_options(autocommit=True))
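Continuing that example, a sketch of scoping autocommit to a Connection copy rather than to a single statement:

    from sqlalchemy import text

    conn = engine.connect()

    # a copy of "conn" sharing the same DBAPI connection, with the
    # autocommit option in effect for everything executed through it
    autocommit_conn = conn.execution_options(autocommit=True)
    autocommit_conn.execute(text("SELECT my_mutating_procedure()"))
    conn.close()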
Implicit execution is also connectionless, and calls the execute() method on the expression itself, utilizing the fact that either an Engine or Connection has been bound to the expression object (binding is discussed further in Schema Definition Language):

engine = create_engine('sqlite:///file.db')
meta.bind = engine
result = users_table.select().execute()
for row in result:
    # ....
result.close()

In both "connectionless" examples, the Connection is created behind the scenes; the ResultProxy returned by the execute() call references the Connection used to issue the SQL statement. When the ResultProxy is closed, the underlying Connection is closed for us, resulting in the DBAPI connection being returned to the pool with transactional resources removed.
    call_operation2()
    db.commit()
    conn.execute(log_table.insert(), message="Operation succeeded")
except:
    db.rollback()
    conn.execute(log_table.insert(), message="Operation failed")
finally:
    conn.close()

To access the Connection that is bound to the threadlocal scope, call Engine.contextual_connect():

conn = db.contextual_connect()
call_operation3(conn)
conn.close()

Calling close() on the "contextual" connection does not release its resources until all other usages of that resource are closed as well, including that any ongoing transactions are rolled back or committed.
The constructor here is not public and is called only by an Engine. See the Engine.connect() and Engine.contextual_connect() methods.

    begin()
        Begin a transaction and return a transaction handle.
        The returned object is an instance of Transaction.
        Repeated calls to begin on the same Connection will create a lightweight, emulated nested transaction. Only the outermost transaction may commit. Calls to commit on inner transactions are ignored. Any transaction in the hierarchy may rollback, however.
        See also Connection.begin_nested(), Connection.begin_twophase().

    begin_nested()
        Begin a nested transaction and return a transaction handle.
        The returned object is an instance of NestedTransaction.
        Nested transactions require SAVEPOINT support in the underlying database. Any transaction in the hierarchy may commit and rollback, however the outermost transaction still controls the overall commit or rollback of the transaction as a whole.
        See also Connection.begin(), Connection.begin_twophase().

    begin_twophase(xid=None)
        Begin a two-phase or XA transaction and return a transaction handle.
        The returned object is an instance of TwoPhaseTransaction, which in addition to the methods provided by Transaction, also provides a prepare() method.
        Parameters
            xid - the two phase transaction id. If not supplied, a random id will be generated.
        See also Connection.begin(), Connection.begin_nested().

    close()
        Close this Connection.
        This results in a release of the underlying database resources, that is, the DBAPI connection referenced internally. The DBAPI connection is typically restored back to the connection-holding Pool referenced by the Engine that produced this Connection. Any transactional state present on the DBAPI connection is also unconditionally released via the DBAPI connection's rollback() method, regardless of any Transaction object that may be outstanding with regards to this Connection.
        After close() is called, the Connection is permanently in a closed state, and will allow no further operations.

    closed
        Return True if this connection is closed.

    connect()
        Returns self.
        This Connectable interface method returns self, allowing Connections to be used interchangeably with Engines in most situations that require a bind.

    connection
        The underlying DB-API connection managed by this Connection.

    contextual_connect(**kwargs)
        Returns self.
        This Connectable interface method returns self, allowing Connections to be used interchangeably with Engines in most situations that require a bind.

    create(entity, **kwargs)
        Emit CREATE statements for the given schema entity.
        Deprecated since version 0.7: Use the create() method on the given schema object directly, i.e. Table.create(), Index.create(), MetaData.create_all().

    detach()
        Detach the underlying DB-API connection from its connection pool.
        This Connection instance will remain usable. When closed, the DB-API connection will be literally closed and not returned to its pool. The pool will typically lazily create a new connection to replace the detached connection.
        This method can be used to insulate the rest of an application from a modified state on a connection (such as a transaction isolation level or similar). Also see PoolListener for a mechanism to modify connection state when connections leave and return to their connection pool.
    drop(entity, **kwargs)
        Emit DROP statements for the given schema entity.
        Deprecated since version 0.7: Use the drop() method on the given schema object directly, i.e. Table.drop(), Index.drop(), MetaData.drop_all().

    execute(object, *multiparams, **params)
        Executes a SQL statement construct and returns a ResultProxy.
        Parameters
            object - The statement to be executed. May be one of:
                - a plain string
                - any ClauseElement construct that is also a subclass of Executable, such as a select() construct
                - a FunctionElement, such as that generated by func, which will be automatically wrapped in a SELECT statement and then executed
                - a DDLElement object
                - a DefaultGenerator object
                - a Compiled object
            *multiparams/**params - represent bound parameter values to be used in the execution. Typically, the format is either a collection of one or more dictionaries passed to *multiparams:

                conn.execute(
                    table.insert(),
                    {"id":1, "value":"v1"},
                    {"id":2, "value":"v2"}
                )

            ...or individual key/values interpreted by **params:

                conn.execute(
                    table.insert(), id=1, value="v1"
                )

            In the case that a plain SQL string is passed, and the underlying DBAPI accepts positional bind parameters, a collection of tuples or individual values in *multiparams may be passed:

                conn.execute(
                    "INSERT INTO table (id, value) VALUES (?, ?)",
                    (1, "v1"), (2, "v2")
                )

                conn.execute(
                    "INSERT INTO table (id, value) VALUES (?, ?)",
                    1, "v1"
                )

            Note above, the usage of a question mark "?" or other symbol is contingent upon the "paramstyle" accepted by the DBAPI in use, which may be any of "qmark", "named", "pyformat", "format", "numeric". See pep-249 for details on paramstyle.
        To execute a textual SQL statement which uses bound parameters in a DBAPI-agnostic way, use the text() construct.

    execution_options(**opt)
        Set non-SQL options for the connection which take effect during execution.
        The method returns a copy of this Connection which references the same underlying DBAPI connection, but also defines the given execution options which will take effect for a call to execute(). As the new Connection references the same underlying resource, it is probably best to ensure that the copies would be discarded immediately, which is implicit if used as in:

            result = connection.execution_options(stream_results=True).\
                        execute(stmt)

        Connection.execution_options() accepts all options as those accepted by Executable.execution_options(). Additionally, it includes options that are applicable only to Connection.

        Parameters
            autocommit - Available on: Connection, statement. When True, a COMMIT will be invoked after execution when executed in "autocommit" mode, i.e. when an explicit transaction is not begun on the connection. Note that DBAPI connections by default are always in a transaction - SQLAlchemy uses rules applied to different kinds of statements to determine if COMMIT will be invoked in order to provide its "autocommit" feature. Typically, all INSERT/UPDATE/DELETE statements as well as CREATE/DROP statements have autocommit behavior enabled; SELECT constructs do not. Use this option when invoking a SELECT or other specific SQL construct where COMMIT is desired (typically when calling stored procedures and such), and an explicit transaction is not in progress.
            compiled_cache - Available on: Connection. A dictionary where Compiled objects will be cached when the Connection compiles a clause expression into a Compiled object. It is the user's responsibility to manage the size of this dictionary, which will have keys corresponding to the dialect, clause element, the column names within the VALUES or SET clause of an INSERT or UPDATE, as well as the "batch" mode for an INSERT or UPDATE statement. The format of this dictionary is not guaranteed to stay the same in future releases. Note that the ORM makes use of its own "compiled" caches for some operations, including flush operations. The caching used by the ORM internally supersedes a cache dictionary specified here.
            isolation_level - Available on: Connection. Set the transaction isolation level for the lifespan of this connection. Valid values include those string values accepted by the isolation_level parameter passed to create_engine(), and are database specific, including those for SQLite and PostgreSQL - see those dialects' documentation for further info. Note that this option necessarily affects the underlying DBAPI connection for the lifespan of the originating Connection, and is not per-execution. This setting is not removed until the underlying DBAPI connection is returned to the connection pool, i.e. the Connection.close() method is called.
            stream_results - Available on: Connection, statement. Indicate to the dialect that results should be "streamed" and not pre-buffered, if possible. This is a limitation of many DBAPIs. The flag is currently understood only by the psycopg2 dialect.

    in_transaction()
        Return True if a transaction is in progress.
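As an illustration of the isolation_level option above, a sketch assuming a PostgreSQL engine and a hypothetical "accounts" table; "SERIALIZABLE" is among the string values accepted by that dialect:

    # the setting persists on the underlying DBAPI connection until it
    # is returned to the pool via close()
    conn = engine.connect().execution_options(isolation_level='SERIALIZABLE')
    try:
        rows = conn.execute("select * from accounts").fetchall()
    finally:
        conn.close()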
    info
        A collection of per-DB-API connection instance properties.

    invalidate(exception=None)
        Invalidate the underlying DBAPI connection associated with this Connection.
        The underlying DB-API connection is literally closed (if possible), and is discarded. Its source connection pool will typically lazily create a new connection to replace it.
        Upon the next usage, this Connection will attempt to reconnect to the pool with a new connection.
        Transactions in progress remain in an "opened" state (even though the actual transaction is gone); these must be explicitly rolled back before a reconnect on this Connection can proceed. This is to prevent applications from accidentally continuing their transactional operations in a non-transactional state.

    invalidated
        Return True if this connection was invalidated.

    reflecttable(table, include_columns=None)
        Load table description from the database.
        Deprecated since version 0.7: Use autoload=True with Table, or use the Inspector object.
        Given a Table object, reflect its columns and properties from the database, populating the given Table object with attributes. If include_columns (a list or set) is specified, limit the autoload to the given column names.
        The default implementation uses the Inspector interface to provide the output, building upon the granular table/column/constraint etc. methods of Dialect.

    run_callable(callable_, *args, **kwargs)
        Given a callable object or function, execute it, passing a Connection as the first argument.
        The given *args and **kwargs are passed subsequent to the Connection argument.
        This function, along with Engine.run_callable(), allows a function to be run with a Connection or Engine object without the need to know which one is being dealt with.

    scalar(object, *multiparams, **params)
        Executes and returns the first column of the first row.
        The underlying result/cursor is closed after execution.

    transaction(callable_, *args, **kwargs)
        Execute the given function within a transaction boundary.
        The function is passed this Connection as the first argument, followed by the given *args and **kwargs. This is a shortcut for explicitly invoking Connection.begin(), calling Transaction.commit() upon success or Transaction.rollback() upon an exception raise:

            def do_something(conn, x, y):
                conn.execute("some statement", {'x': x, 'y': y})

            conn.transaction(do_something, 5, 10)

        Note that context managers (i.e. the "with" statement) present a more modern way of accomplishing the above, using the Transaction object as a base:

            with conn.begin():
                conn.execute("some statement", {'x': 5, 'y': 10})

        One advantage to the Connection.transaction() method is that the same method is also available on Engine as Engine.transaction() - this method procures a Connection and then performs
        the same operation, allowing equivalent usage with either a Connection or Engine without needing to know what kind of object it is.

class sqlalchemy.engine.base.Connectable
    Bases: object
    Interface for an object which supports execution of SQL constructs.
    The two implementations of Connectable are Connection and Engine.
    Connectable must also implement the dialect member which references a Dialect instance.

    connect(**kwargs)
        Return a Connection object.
        Depending on context, this may be self if this object is already an instance of Connection, or a newly procured Connection if this object is an instance of Engine.

    contextual_connect()
        Return a Connection object which may be part of an ongoing context.
        Depending on context, this may be self if this object is already an instance of Connection, or a newly procured Connection if this object is an instance of Engine.

    create(entity, **kwargs)
        Emit CREATE statements for the given schema entity.
        Deprecated since version 0.7: Use the create() method on the given schema object directly, i.e. Table.create(), Index.create(), MetaData.create_all().

    drop(entity, **kwargs)
        Emit DROP statements for the given schema entity.
        Deprecated since version 0.7: Use the drop() method on the given schema object directly, i.e. Table.drop(), Index.drop(), MetaData.drop_all().

    execute(object, *multiparams, **params)
        Executes the given construct and returns a ResultProxy.

    scalar(object, *multiparams, **params)
        Executes and returns the first column of the first row.
        The underlying cursor is closed after execution.

class sqlalchemy.engine.base.Engine(pool, dialect, url, logging_name=None, echo=None, proxy=None, execution_options=None)
    Bases: sqlalchemy.engine.base.Connectable, sqlalchemy.log.Identified
    Connects a Pool and Dialect together to provide a source of database connectivity and behavior.
    An Engine object is instantiated publicly using the create_engine() function.
    See also: Engine Configuration, Working with Engines and Connections.

    connect(**kwargs)
        Return a new Connection object.
        The Connection object is a facade that uses a DBAPI connection internally in order to communicate with the database. This connection is procured from the connection-holding Pool referenced by this Engine. When the close() method of the Connection object is called, the underlying DBAPI connection is then returned to the connection pool, where it may be used again in a subsequent call to connect().
    contextual_connect(close_with_result=False, **kwargs)
        Return a Connection object which may be part of some ongoing context.
        By default, this method does the same thing as Engine.connect(). Subclasses of Engine may override this method to provide contextual behavior.
        Parameters
            close_with_result - When True, the first ResultProxy created by the Connection will call the Connection.close() method of that connection as soon as any pending result rows are exhausted. This is used to supply the "connectionless" execution behavior provided by the Engine.execute() method.

    create(entity, connection=None, **kwargs)
        Emit CREATE statements for the given schema entity.
        Deprecated since version 0.7: Use the create() method on the given schema object directly, i.e. Table.create(), Index.create(), MetaData.create_all().

    dispose()
        Dispose of the connection pool used by this Engine.
        A new connection pool is created immediately after the old one has been disposed. This new pool, like all SQLAlchemy connection pools, does not make any actual connections to the database until one is first requested.
        This method has two general use cases:
            - When a dropped connection is detected, it is assumed that all connections held by the pool are potentially dropped, and the entire pool is replaced.
            - An application may want to use dispose() within a test suite that is creating multiple engines.
        It is critical to note that dispose() does not guarantee that the application will release all open database connections - only those connections that are checked into the pool are closed. Connections which remain checked out or have been detached from the engine are not affected.

    driver
        Driver name of the Dialect in use by this Engine.

    drop(entity, connection=None, **kwargs)
        Emit DROP statements for the given schema entity.
        Deprecated since version 0.7: Use the drop() method on the given schema object directly, i.e. Table.drop(), Index.drop(), MetaData.drop_all().

    echo
        When True, enable log output for this element.
        This has the effect of setting the Python logging level for the namespace of this element's class and object reference. A value of boolean True indicates that the loglevel logging.INFO will be set for the logger, whereas the string value "debug" will set the loglevel to logging.DEBUG.

    execute(statement, *multiparams, **params)
        Executes the given construct and returns a ResultProxy.
        The arguments are the same as those used by Connection.execute().
        Here, a Connection is acquired using the contextual_connect() method, and the statement executed with that connection. The returned ResultProxy is flagged such that when the ResultProxy is exhausted and its underlying cursor is closed, the Connection created here will also be closed, which allows its associated DBAPI connection resource to be returned to the connection pool.

    func
        Deprecated since version 0.7: Use func to create function constructs.
    name
        String name of the Dialect in use by this Engine.

    raw_connection()
        Return a "raw" DBAPI connection from the connection pool.
        The returned object is a proxied version of the DBAPI connection object used by the underlying driver in use. The object will have all the same behavior as the real DBAPI connection, except that its close() method will result in the connection being returned to the pool, rather than being closed for real.
        This method provides direct DBAPI connection access for special situations. In most situations, the Connection object should be used, which is procured using the Engine.connect() method.

    reflecttable(table, connection=None, include_columns=None)
        Load table description from the database.
        Deprecated since version 0.7: Use autoload=True with Table, or use the Inspector object.
        Uses the given Connection, or if None produces its own Connection, and passes the table and include_columns arguments onto that Connection object's Connection.reflecttable() method. The Table object is then populated with new attributes.

    run_callable(callable_, *args, **kwargs)
        Given a callable object or function, execute it, passing a Connection as the first argument.
        The given *args and **kwargs are passed subsequent to the Connection argument.
        This function, along with Connection.run_callable(), allows a function to be run with a Connection or Engine object without the need to know which one is being dealt with.

    table_names(schema=None, connection=None)
        Return a list of all table names available in the database.
        Parameters
            schema - Optional, retrieve names from a non-default schema.
            connection - Optional, use a specified connection. Default is the contextual_connect for this Engine.
    text(text, *args, **kwargs)
        Return a text() construct, bound to this engine.
        Deprecated since version 0.7: Use expression.text() to create text constructs.
        This is equivalent to:

            text("SELECT * FROM table", bind=engine)

    transaction(callable_, *args, **kwargs)
        Execute the given function within a transaction boundary.
        The function is passed a newly procured Connection as the first argument, followed by the given *args and **kwargs. The Connection is then closed (returned to the pool) when the operation is complete.
        This method can be used interchangeably with Connection.transaction(). See that method for more details on usage as well as a modern alternative using context managers (i.e. the "with" statement).

    update_execution_options(**opt)
        Update the default execution_options dictionary of this Engine.
        The given keys/values in **opt are added to the default execution options that will be used for all connections. The initial contents of this dictionary can be sent via the execution_options parameter to create_engine().
        See Connection.execution_options() for more details on execution options.
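For illustration, a brief sketch of the raw_connection() and table_names() methods described above, assuming an engine already created against a running database:

    # direct DBAPI access for special situations; close() returns the
    # connection to the pool rather than closing it for real
    raw = engine.raw_connection()
    try:
        cursor = raw.cursor()
        cursor.execute("select 1")
        print cursor.fetchall()
        cursor.close()
    finally:
        raw.close()

    # list table names, optionally from a non-default schema
    print engine.table_names()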
class sqlalchemy.engine.base.NestedTransaction(connection, parent)
    Bases: sqlalchemy.engine.base.Transaction
    Represent a "nested", or SAVEPOINT transaction.
    A new NestedTransaction object may be procured using the Connection.begin_nested() method.
    The interface is the same as that of Transaction.

class sqlalchemy.engine.base.ResultProxy(context)
    Wraps a DB-API cursor object to provide easier access to row columns.
    Individual columns may be accessed by their integer position, case-insensitive column name, or by schema.Column object, e.g.:

        row = fetchone()

        col1 = row[0]                   # access via integer position
        col2 = row['col2']              # access via name
        col3 = row[mytable.c.mycol]     # access via Column object

    ResultProxy also handles post-processing of result column data using TypeEngine objects, which are referenced from the originating SQL statement that produced this result set.

    close(_autoclose_connection=True)
        Close this ResultProxy.
        Closes the underlying DBAPI cursor corresponding to the execution.
        Note that any data cached within this ResultProxy is still available. For some types of results, this may include buffered rows.
        If this ResultProxy was generated from an implicit execution, the underlying Connection will also be closed (returns the underlying DBAPI connection to the connection pool).
        This method is called automatically when:
            - all result rows are exhausted using the fetchXXX() methods.
            - cursor.description is None.

    fetchall()
        Fetch all rows, just like DB-API cursor.fetchall().

    fetchmany(size=None)
        Fetch many rows, just like DB-API cursor.fetchmany(size=cursor.arraysize).
        If rows are present, the cursor remains open after this is called. Else the cursor is automatically closed and an empty list is returned.

    fetchone()
        Fetch one row, just like DB-API cursor.fetchone().
        If a row is present, the cursor remains open after this is called. Else the cursor is automatically closed and None is returned.

    first()
        Fetch the first row and then close the result set unconditionally.
        Returns None if no row is present.
    inserted_primary_key
        Return the primary key for the row just inserted.
        The return value is a list of scalar values corresponding to the list of primary key columns in the target table.
        This only applies to single row insert() constructs which did not explicitly specify Insert.returning().
        Note that primary key columns which specify a server_default clause, or otherwise do not qualify as "autoincrement" columns (see the notes at Column), and were generated using the database-side default, will appear in this list as None unless the backend supports "returning" and the insert statement executed with the "implicit returning" enabled.

    is_insert
        True if this ResultProxy is the result of executing an expression language compiled expression.insert() construct.
        When True, this implies that the inserted_primary_key attribute is accessible, assuming the statement did not include a user-defined "returning" construct.

    keys()
        Return the current set of string keys for rows.

    last_inserted_ids()
        Return the primary key for the row just inserted.
        Deprecated since version 0.6: Use ResultProxy.inserted_primary_key.

    last_inserted_params()
        Return the collection of inserted parameters from this execution.

    last_updated_params()
        Return the collection of updated parameters from this execution.

    lastrow_has_defaults()
        Return lastrow_has_defaults() from the underlying ExecutionContext.
        See ExecutionContext for details.

    lastrowid
        Return the "lastrowid" accessor on the DBAPI cursor.
        This is a DBAPI specific method and is only functional for those backends which support it, for statements where it is appropriate. Its behavior is not consistent across backends.
        Usage of this method is normally unnecessary; the inserted_primary_key attribute provides a tuple of primary key values for a newly inserted row, regardless of database backend.

    postfetch_cols()
        Return postfetch_cols() from the underlying ExecutionContext.
        See ExecutionContext for details.

    returns_rows
        True if this ResultProxy returns rows.
        I.e. if it is legal to call the methods fetchone(), fetchmany() and fetchall().
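For illustration, a sketch of inserted_primary_key, assuming a hypothetical "users" Table with an autoincrementing integer primary key:

    conn = engine.connect()
    result = conn.execute(users.insert(), name='jack')

    # a list of scalar values, one per primary key column of "users"
    print result.inserted_primary_key   # e.g. [1]
    conn.close()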
    rowcount
        Return the "rowcount" for this result.
        The "rowcount" reports the number of rows affected by an UPDATE or DELETE statement. It has no other uses and is not intended to provide the number of rows present from a SELECT.
        Note that this row count may not be properly implemented in some dialects; this is indicated by supports_sane_rowcount() and supports_sane_multi_rowcount(). rowcount() also may not work at this time for a statement that uses returning().

    scalar()
        Fetch the first column of the first row, and close the result set.
        Returns None if no row is present.

    supports_sane_multi_rowcount()
        Return supports_sane_multi_rowcount from the dialect.

    supports_sane_rowcount()
        Return supports_sane_rowcount from the dialect.

class sqlalchemy.engine.base.RowProxy(parent, row, processors, keymap)
    Proxy values from a single cursor row.
    Mostly follows "ordered dictionary" behavior, mapping result values to the string-based column name, the integer position of the result in the row, as well as Column instances which can be mapped to the original Columns that produced this result set (for results that correspond to constructed SQL expressions).

    has_key(key)
        Return True if this RowProxy contains the given key.

    items()
        Return a list of tuples, each tuple containing a key/value pair.

    keys()
        Return the list of keys as strings represented by this RowProxy.

class sqlalchemy.engine.base.Transaction(connection, parent)
    Bases: object
    Represent a database transaction in progress.
    The Transaction object is procured by calling the begin() method of Connection:

        from sqlalchemy import create_engine
        engine = create_engine("postgresql://scott:tiger@localhost/test")
        connection = engine.connect()
        trans = connection.begin()
        connection.execute("insert into x (a, b) values (1, 2)")
        trans.commit()

    The object provides rollback() and commit() methods in order to control transaction boundaries. It also implements a context manager interface so that the Python "with" statement can be used with the Connection.begin() method:

        with connection.begin():
            connection.execute("insert into x (a, b) values (1, 2)")

    The Transaction object is not threadsafe.
    See also: Connection.begin(), Connection.begin_twophase(), Connection.begin_nested().

    close()
        Close this Transaction.
        If this transaction is the base transaction in a begin/commit nesting, the transaction will rollback(). Otherwise, the method returns.
        This is used to cancel a Transaction without affecting the scope of an enclosing transaction.

    commit()
        Commit this Transaction.

    rollback()
        Roll back this Transaction.

class sqlalchemy.engine.base.TwoPhaseTransaction(connection, xid)
    Bases: sqlalchemy.engine.base.Transaction
    Represent a two-phase transaction.
    A new TwoPhaseTransaction object may be procured using the Connection.begin_twophase() method.
    The interface is the same as that of Transaction with the addition of the prepare() method.

    prepare()
        Prepare this TwoPhaseTransaction.
        After a PREPARE, the transaction can be committed.
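For illustration, a minimal sketch of the two-phase flow, assuming a backend with XA support such as PostgreSQL with psycopg2 and a pre-existing "mytable":

    conn = engine.connect()
    xa = conn.begin_twophase()      # a TwoPhaseTransaction; xid generated randomly
    conn.execute("insert into mytable values (1)")
    xa.prepare()                    # phase one: the transaction is PREPAREd
    xa.commit()                     # phase two: commit the prepared transaction
    conn.close()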
cursor = conn.cursor()
cursor.execute("select foo")

The purpose of the transparent proxy is to intercept the close() call, such that instead of the DBAPI connection being closed, it's returned to the pool:

# "close" the connection.  Returns
# it to the pool.
conn.close()
The proxy also returns its contained DBAPI connection to the pool when it is garbage collected, though it's not deterministic in Python that this occurs immediately (though it is typical with cPython).

A particular pre-created Pool can be shared with one or more engines by passing it to the pool argument of create_engine():

e = create_engine('postgresql://', pool=mypool)
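For illustration, a pool such as the mypool above might be constructed directly with a "creator" function returning a DBAPI connection; psycopg2 and its connect arguments here are an assumption:

    import sqlalchemy.pool as pool
    import psycopg2

    def getconn():
        # any callable returning a DBAPI connection will do
        return psycopg2.connect(user='scott', host='localhost', dbname='test')

    # five persistent connections plus ten of overflow, recycled hourly
    mypool = pool.QueuePool(getconn, pool_size=5, max_overflow=10, recycle=3600)

    conn = mypool.connect()   # a transparent proxy for a DBAPI connection
    cursor = conn.cursor()
    cursor.execute("select 1")
    conn.close()              # returns the underlying connection to the pool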
c = e.connect()
c.execute("SELECT * FROM table")

The above example illustrates that no special intervention is needed; the pool continues normally after a disconnection event is detected. However, an exception is raised for the operation that was in progress when the disconnect occurred. In a typical web application using an ORM Session, the above condition would correspond to a single request failing with a 500 error, then the web application continuing normally beyond that. Hence the approach is "optimistic" in that frequent database restarts are not anticipated.
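A sketch of that optimistic pattern in code, assuming a "mytable" as before; DBAPIError.connection_invalidated indicates that the pool was refreshed due to a detected disconnect:

    from sqlalchemy import exc

    conn = engine.connect()
    try:
        conn.execute("select * from mytable")
    except exc.DBAPIError, e:
        if e.connection_invalidated:
            # the pool has discarded its stale connections; a retry on
            # a fresh connection can proceed normally
            conn = engine.connect()
            conn.execute("select * from mytable")
        else:
            raise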
from sqlalchemy import create_engine
e = create_engine("mysql://scott:tiger@localhost/test", echo_pool=True)

c1 = e.connect()
c2 = e.connect()
c3 = e.connect()
c1.close()
c2.close()
c3.close()

# pool size is now three.

print "Restart the server"
raw_input()

for i in xrange(10):
    c = e.connect()
    print c.execute("select 1").fetchall()
    c.close()
        events - a list of 2-tuples, each of the form (callable, target) which will be passed to event.listen() upon construction. Provided here so that event listeners can be assigned via create_engine before dialect-level listeners are applied.
        listeners - Deprecated. A list of PoolListener-like objects or dictionaries of callables that receive events when DB-API connections are created, checked out and checked in to the pool. This has been superseded by listen().

    connect()
        Return a DBAPI connection from the pool.
        The connection is instrumented such that when its close() method is called, the connection will be returned to the pool.

    dispose()
        Dispose of this pool.
        This method leaves the possibility of checked-out connections remaining open, as it only affects connections that are idle in the pool.
        See also the Pool.recreate() method.

    recreate()
        Return a new Pool, of the same class as this one and configured with identical creation arguments.
        This method is used in conjunction with dispose() to close out an entire Pool and create a new one in its place.

class sqlalchemy.pool.QueuePool(creator, pool_size=5, max_overflow=10, timeout=30, **kw)
    Bases: sqlalchemy.pool.Pool
    A Pool that imposes a limit on the number of open connections.
    QueuePool is the default pooling implementation used for all Engine objects, unless the SQLite dialect is in use.

    __init__(creator, pool_size=5, max_overflow=10, timeout=30, **kw)
        Construct a QueuePool.
        Parameters
            creator - a callable function that returns a DB-API connection object. The function will be called with parameters.
            pool_size - The size of the pool to be maintained, defaults to 5. This is the largest number of connections that will be kept persistently in the pool. Note that the pool begins with no connections; once this number of connections is requested, that number of connections will remain. pool_size can be set to 0 to indicate no size limit; to disable pooling, use a NullPool instead.
            max_overflow - The maximum overflow size of the pool. When the number of checked-out connections reaches the size set in pool_size, additional connections will be returned up to this limit. When those additional connections are returned to the pool, they are disconnected and discarded. It follows then that the total number of simultaneous connections the pool will allow is pool_size + max_overflow, and the total number of "sleeping" connections the pool will allow is pool_size. max_overflow can be set to -1 to indicate no overflow limit; no limit will be placed on the total number of concurrent connections. Defaults to 10.
            timeout - The number of seconds to wait before giving up on returning a connection. Defaults to 30.
            recycle - If set to a value other than -1, the number of seconds between connection recycling, which means upon checkout, if this timeout is surpassed the connection will be closed and replaced with a newly opened connection. Defaults to -1.
            echo - If True, connections being pulled and retrieved from the pool will be logged to the standard output, as well as pool sizing information. Echoing can also be achieved by enabling logging for the "sqlalchemy.pool" namespace. Defaults to False.
            use_threadlocal - If set to True, repeated calls to connect() within the same application thread will be guaranteed to return the same connection object, if one has already been retrieved from the pool and has not been returned yet. Offers a slight performance advantage at the cost of individual transactions by default. The unique_connection() method is provided to bypass the threadlocal behavior installed into connect().
            reset_on_return - If true, reset the database state of connections returned to the pool. This is typically a ROLLBACK to release locks and transaction resources. Disable at your own peril. Defaults to True.
            listeners - A list of PoolListener-like objects or dictionaries of callables that receive events when DB-API connections are created, checked out and checked in to the pool.

class sqlalchemy.pool.SingletonThreadPool(creator, pool_size=5, **kw)
    Bases: sqlalchemy.pool.Pool
    A Pool that maintains one connection per thread.
    Maintains one connection per each thread, never moving a connection to a thread other than the one which it was created in.
    Options are the same as those of Pool, as well as:
    Parameters
        pool_size - The number of threads in which to maintain connections at once. Defaults to five.
    SingletonThreadPool is used by the SQLite dialect automatically when a memory-based database is used. See SQLite.

    __init__(creator, pool_size=5, **kw)

class sqlalchemy.pool.AssertionPool(*args, **kw)
    Bases: sqlalchemy.pool.Pool
    A Pool that allows at most one checked out connection at any given time.
    This will raise an exception if more than one connection is checked out at a time. Useful for debugging code that is using more connections than desired.
    AssertionPool also logs a traceback of where the original connection was checked out, and reports this in the assertion error raised (new in 0.7).

class sqlalchemy.pool.NullPool(creator, recycle=-1, echo=None, use_threadlocal=False, logging_name=None, reset_on_return=True, listeners=None, events=None, _dispatch=None)
    Bases: sqlalchemy.pool.Pool
    A Pool which does not pool connections.
    Instead it literally opens and closes the underlying DB-API connection per each connection open/close.
    Reconnect-related functions such as recycle and connection invalidation are not supported by this Pool implementation, since no connections are held persistently.
    NullPool is used by the SQLite dialect automatically when a file-based database is used (as of SQLAlchemy 0.7). See SQLite.
class sqlalchemy.pool.StaticPool(creator, recycle=-1, echo=None, use_threadlocal=False, logging_name=None, reset_on_return=True, listeners=None, events=None, _dispatch=None)
    Bases: sqlalchemy.pool.Pool
    A Pool of exactly one connection, used for all requests.
    Reconnect-related functions such as recycle and connection invalidation (which is also used to support auto-reconnect) are not currently supported by this Pool implementation but may be implemented in a future release.
invoice
invoice_item

In most cases, individual Table objects have been explicitly declared, and these objects are typically accessed directly as module-level variables in an application. Once a Table has been defined, it has a full set of accessors which allow inspection of its properties. Given the following Table definition:

employees = Table('employees', metadata,
    Column('employee_id', Integer, primary_key=True),
    Column('employee_name', String(60), nullable=False),
    Column('employee_dept', Integer, ForeignKey("departments.department_id"))
)

Note the ForeignKey object used in this table - this construct defines a reference to a remote table, and is fully described in Defining Foreign Keys. Methods of accessing information about this table include:

# access the column "EMPLOYEE_ID":
employees.columns.employee_id

# or just
employees.c.employee_id

# via string
employees.c['employee_id']

# iterate through all columns
for c in employees.c:
    print c

# get the table's primary key columns
for primary_key in employees.primary_key:
    print primary_key

# get the table's foreign key objects:
for fkey in employees.foreign_keys:
    print fkey

# access the table's MetaData:
employees.metadata

# access the table's bound Engine or Connection, if its MetaData is bound:
employees.bind

# access a column's name, type, nullable, primary key, foreign key
employees.c.employee_id.name
employees.c.employee_id.type
employees.c.employee_id.nullable
employees.c.employee_id.primary_key
employees.c.employee_dept.foreign_keys

# get the "key" of a column, which defaults to its name, but can
# be any user-defined string:
employees.c.employee_name.key

# access a column's table:
employees.c.employee_id.table is employees
# get the table related by a foreign key
list(employees.c.employee_dept.foreign_keys)[0].column.table

Creating and Dropping Database Tables

Once you've defined some Table objects, assuming you're working with a brand new database one thing you might want to do is issue CREATE statements for those tables and their related constructs (as an aside, it's also quite possible that you don't want to do this, if you already have some preferred methodology such as tools included with your database or an existing scripting system - if that's the case, feel free to skip this section - SQLAlchemy has no requirement that it be used to create your tables).

The usual way to issue CREATE is to use create_all() on the MetaData object. This method will issue queries that first check for the existence of each individual table, and if not found will issue the CREATE statements:

engine = create_engine('sqlite:///:memory:')

metadata = MetaData()

user = Table('user', metadata,
    Column('user_id', Integer, primary_key=True),
    Column('user_name', String(16), nullable=False),
    Column('email_address', String(60), key='email'),
    Column('password', String(20), nullable=False)
)

user_prefs = Table('user_prefs', metadata,
    Column('pref_id', Integer, primary_key=True),
    Column('user_id', Integer, ForeignKey("user.user_id"), nullable=False),
    Column('pref_name', String(40), nullable=False),
    Column('pref_value', String(100))
)

metadata.create_all(engine)
PRAGMA table_info(user){}
CREATE TABLE user(
    user_id INTEGER NOT NULL PRIMARY KEY,
    user_name VARCHAR(16) NOT NULL,
    email_address VARCHAR(60),
    password VARCHAR(20) NOT NULL
)
PRAGMA table_info(user_prefs){}
CREATE TABLE user_prefs(
    pref_id INTEGER NOT NULL PRIMARY KEY,
    user_id INTEGER NOT NULL REFERENCES user(user_id),
    pref_name VARCHAR(40) NOT NULL,
    pref_value VARCHAR(100)
)

create_all() creates foreign key constraints between tables usually inline with the table definition itself, and for this reason it also generates the tables in order of their dependency. There are options to change this behavior such that ALTER TABLE is used instead.

Dropping all tables is similarly achieved using the drop_all() method. This method does the exact opposite of create_all() - the presence of each table is checked first, and tables are dropped in reverse order of dependency.
Creating and dropping individual tables can be done via the create() and drop() methods of Table. These methods by default issue the CREATE or DROP regardless of the table being present:

engine = create_engine('sqlite:///:memory:')

meta = MetaData()

employees = Table('employees', meta,
    Column('employee_id', Integer, primary_key=True),
    Column('employee_name', String(60), nullable=False, key='name'),
    Column('employee_dept', Integer, ForeignKey("departments.department_id"))
)
employees.create(engine)
CREATE TABLE employees(
    employee_id SERIAL NOT NULL PRIMARY KEY,
    employee_name VARCHAR(60) NOT NULL,
    employee_dept INTEGER REFERENCES departments(department_id)
)
{}

The drop() method:

employees.drop(engine)
DROP TABLE employees
{}

To enable the "check first for the table existing" logic, add the checkfirst=True argument to create() or drop():

employees.create(engine, checkfirst=True)
employees.drop(engine, checkfirst=False)

Binding MetaData to an Engine or Connection

Notice in the previous section the creator/dropper methods accept an argument for the database engine in use. When a schema construct is combined with an Engine object, or an individual Connection object, we call this the bind. In the above examples the bind is associated with the schema construct only for the duration of the operation. However, the option exists to persistently associate a bind with a set of schema constructs via the MetaData object's bind attribute:

engine = create_engine('sqlite://')

# create MetaData
meta = MetaData()

# bind to an engine
meta.bind = engine

We can now call methods like create_all() without needing to pass the Engine:

meta.create_all()

The MetaData's bind is used for anything that requires an active connection, such as loading the definition of a table from the database automatically (called reflection):

# describe a table called 'users', query the database for its columns
users_table = Table('users', meta, autoload=True)

As well as for executing SQL constructs that are derived from that MetaData's table objects:
# generate a SELECT statement and execute
result = users_table.select().execute()

Binding the MetaData to the Engine is a completely optional feature. The above operations can be achieved without the persistent bind using parameters:

# describe a table called 'users', query the database for its columns
users_table = Table('users', meta, autoload=True, autoload_with=engine)

# generate a SELECT statement and execute
result = engine.execute(users_table.select())

Should you use bind? It's probably best to start without it, and wait for a specific need to arise. Bind is useful if:

- You aren't using the ORM, are usually using "connectionless" execution, and find yourself constantly needing to specify the same Engine object throughout the entire application. Bind can be used here to provide "implicit" execution.
- Your application has multiple schemas that correspond to different engines. Using one MetaData for each schema, bound to each engine, provides a decent place to delineate between the schemas. The ORM will also integrate with this approach, where the Session will naturally use the engine that is bound to each table via its metadata (provided the Session itself has no bind configured).

Alternatively, the bind attribute of MetaData is confusing if:

- Your application talks to multiple database engines at different times, which use the same set of Table objects. It's usually confusing and unnecessary to begin to create copies of Table objects just so that different engines can be used for different operations. An example is an application that writes data to a "master" database while performing read-only operations from a "read slave". A global MetaData object is not appropriate for per-request switching like this, although a ThreadLocalMetaData object is.
- You are using the ORM Session to handle which class/table is bound to which engine, or you are using the Session to manage switching between engines. It's a good idea to keep the "binding of tables to engines" in one place - either using MetaData only (the Session can of course be present, it just has no bind configured), or using Session only (the bind attribute of MetaData is left empty).

Specifying the Schema Name

Some databases support the concept of multiple schemas. A Table can reference this by specifying the schema keyword argument:

financial_info = Table('financial_info', meta,
    Column('id', Integer, primary_key=True),
    Column('value', String(100), nullable=False),
    schema='remote_banks'
)

Within the MetaData collection, this table will be identified by the combination of financial_info and remote_banks. If another table called financial_info is referenced without the remote_banks schema, it will refer to a different Table. ForeignKey objects can specify references to columns in this table using the form remote_banks.financial_info.id.

The schema argument should be used for any name qualifiers required, including Oracle's "owner" attribute and similar. It also can accommodate a dotted name for longer schemes:

schema="dbo.scott"
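A brief sketch of the schema-qualified ForeignKey form just described; the referencing table here is hypothetical:

offices = Table('offices', meta,
    Column('id', Integer, primary_key=True),
    # references the schema-qualified table defined above
    Column('financial_info_id', Integer,
           ForeignKey('remote_banks.financial_info.id'))
)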
Backend-Specific Options

Table supports database-specific options. For example, MySQL has different table backend types, including "MyISAM" and "InnoDB". This can be expressed with Table using mysql_engine:

addresses = Table('engine_email_addresses', meta,
    Column('address_id', Integer, primary_key=True),
    Column('remote_user_id', Integer, ForeignKey(users.c.user_id)),
    Column('email_address', String(20)),
    mysql_engine='InnoDB'
)

Other backends may support table-level options as well - these would be described in the individual documentation sections for each dialect.

Column, Table, MetaData API

class sqlalchemy.schema.Column(*args, **kwargs)
Bases: sqlalchemy.schema.SchemaItem, sqlalchemy.sql.expression.ColumnClause

Represents a column in a database table.

__init__(*args, **kwargs)
Construct a new Column object.

Parameters
- name: The name of this column as represented in the database. This argument may be the first positional argument, or specified via keyword. Names which contain no upper case characters will be treated as case insensitive names, and will not be quoted unless they are a reserved word. Names with any number of upper case characters will be quoted and sent exactly. Note that this behavior applies even for databases which standardize upper case names as case insensitive such as Oracle. The name field may be omitted at construction time and applied later, at any time before the Column is associated with a Table. This is to support convenient usage within the declarative extension.
- type_: The column's type, indicated using an instance which subclasses TypeEngine. If no arguments are required for the type, the class of the type can be sent as well, e.g.:

    # use a type with arguments
    Column('data', String(50))

    # use no arguments
    Column('level', Integer)

  The type argument may be the second positional argument or specified by keyword. There is partial support for automatic detection of the type based on that of a ForeignKey associated with this column, if the type is specified as None. However, this feature is not fully implemented and may not function in all cases.
- *args: Additional positional arguments include various SchemaItem derived constructs which will be applied as options to the column. These include instances of Constraint, ForeignKey, ColumnDefault, and Sequence. In some cases an equivalent keyword argument is available such as server_default, default and unique.
- autoincrement: This flag may be set to False to indicate an integer primary key column that should not be considered to be the "autoincrement" column, that is the integer primary key column which generates values implicitly upon INSERT and whose value is usually returned via the DBAPI cursor.lastrowid attribute. It defaults to True to satisfy the common use case of a table with a single integer primary key column. If the table has a composite primary key consisting of more than one integer column, set this flag to True only on the column that should be considered "autoincrement". The setting only has an effect for columns which are:
  - Integer derived (i.e. INT, SMALLINT, BIGINT)
  - Part of the primary key
  - Not referenced by any foreign keys
  - Have no server side or client side defaults (with the exception of Postgresql SERIAL)

  The setting has these two effects on columns that meet the above criteria:
  - DDL issued for the column will include database-specific keywords intended to signify this column as an "autoincrement" column, such as AUTO_INCREMENT on MySQL, SERIAL on Postgresql, and IDENTITY on MS-SQL. It does not issue AUTOINCREMENT for SQLite since this is a special SQLite flag that is not required for autoincrementing behavior. See the SQLite dialect documentation for information on SQLite's AUTOINCREMENT.
  - The column will be considered to be available as cursor.lastrowid or equivalent, for those dialects which "post fetch" newly inserted identifiers after a row has been inserted (SQLite, MySQL, MS-SQL). It does not have any effect in this regard for databases that use sequences to generate primary key identifiers (i.e. Firebird, Postgresql, Oracle).
- default: A scalar, Python callable, or ClauseElement representing the default value for this column, which will be invoked upon insert if this column is otherwise not specified in the VALUES clause of the insert. This is a shortcut to using ColumnDefault as a positional argument. Contrast this argument to server_default which creates a default generator on the database side.
- doc: Optional string that can be used by the ORM or similar to document attributes. This attribute does not render SQL comments (a future attribute "comment" will achieve that).
- key: An optional string identifier which will identify this Column object on the Table. When a key is provided, this is the only identifier referencing the Column within the application, including ORM attribute mapping; the name field is used only when rendering SQL.
- index: When True, indicates that the column is indexed. This is a shortcut for using an Index construct on the table. To specify indexes with explicit names or indexes that contain multiple columns, use the Index construct instead.
- info: A dictionary which defaults to {}. A space to store application specific data. This must be a dictionary.
- nullable: If set to the default of True, indicates the column will be rendered as allowing NULL, else it's rendered as NOT NULL. This parameter is only used when issuing CREATE TABLE statements.
- onupdate: A scalar, Python callable, or ClauseElement representing a default value to be applied to the column within UPDATE statements, which will be invoked upon update
if this column is not present in the SET clause of the update. This is a shortcut to using ColumnDefault as a positional argument with for_update=True.
- primary_key: If True, marks this column as a primary key column. Multiple columns can have this flag set to specify composite primary keys. As an alternative, the primary key of a Table can be specified via an explicit PrimaryKeyConstraint object.
- server_default: A FetchedValue instance, str, Unicode or text() construct representing the DDL DEFAULT value for the column. String types will be emitted as-is, surrounded by single quotes:

    Column('x', Text, server_default="val")

    x TEXT DEFAULT 'val'

  A text() expression will be rendered as-is, without quotes:

    Column('y', DateTime, server_default=text('NOW()'))

    y DATETIME DEFAULT NOW()

  Strings and text() will be converted into a DefaultClause object upon initialization. Use FetchedValue to indicate that an already-existing column will generate a default value on the database side which will be available to SQLAlchemy for post-fetch after inserts. This construct does not specify any DDL and the implementation is left to the database, such as via a trigger.
- server_onupdate: A FetchedValue instance representing a database-side default generation function. This indicates to SQLAlchemy that a newly generated value will be available after updates. This construct does not specify any DDL and the implementation is left to the database, such as via a trigger.
- quote: Force quoting of this column's name on or off, corresponding to True or False. When left at its default of None, the column identifier will be quoted according to whether the name is case sensitive (identifiers with at least one upper case character are treated as case sensitive), or if it's a reserved word. This flag is only needed to force quoting of a reserved word which is not known by the SQLAlchemy dialect.
- unique: When True, indicates that this column contains a unique constraint, or if index is True as well, indicates that the Index should be created with the unique flag. To specify multiple columns in the constraint/index or to specify an explicit name, use the UniqueConstraint or Index constructs explicitly.

append_foreign_key(fk)

copy(**kw)
Create a copy of this Column, uninitialized. This is used in Table.tometadata.

get_children(schema_visitor=False, **kwargs)

references(column)
Return True if this Column references the given column via foreign key.

class sqlalchemy.schema.MetaData(bind=None, reflect=False)
Bases: sqlalchemy.schema.SchemaItem

A collection of Table objects and their associated schema constructs.
Holds a collection of Table objects as well as an optional binding to an Engine or Connection. If bound, the Table objects in the collection and their columns may participate in implicit SQL execution.

The Table objects themselves are stored in the metadata.tables dictionary.

The bind property may be assigned to dynamically. A common pattern is to start unbound and then bind later when an engine is available:

metadata = MetaData()

# define tables
Table('mytable', metadata, ...)

# connect to an engine later, perhaps after loading a URL from a
# configuration file
metadata.bind = an_engine

MetaData is a thread-safe object after tables have been explicitly defined or loaded via reflection.

See also: Describing Databases with MetaData - Introduction to database metadata; Binding MetaData to an Engine or Connection - Information on binding connectables to MetaData

__init__(bind=None, reflect=False)
Create a new MetaData object.

Parameters
- bind: An Engine or Connection to bind to. May also be a string or URL instance, these are passed to create_engine() and this MetaData will be bound to the resulting engine.
- reflect: Optional, automatically load all tables from the bound database. Defaults to False. bind is required when this option is set. For finer control over loaded tables, use the reflect() method of MetaData.

append_ddl_listener(event_name, listener)
Append a DDL event listener to this MetaData. Deprecated. See DDLEvents.

bind
An Engine or Connection to which this MetaData is bound. This property may be assigned an Engine or Connection, or assigned a string or URL to automatically create a basic Engine for this bind with create_engine().

clear()
Clear all Table objects from this MetaData.

create_all(bind=None, tables=None, checkfirst=True)
Create all tables stored in this metadata. Conditional by default, will not attempt to recreate tables already present in the target database.

Parameters
- bind: A Connectable used to access the database; if None, uses the existing bind on this MetaData, if any.
- tables: Optional list of Table objects, which is a subset of the total tables in the MetaData (others are ignored).
- checkfirst: Defaults to True, don't issue CREATEs for tables already present in the target database.
drop_all(bind=None, tables=None, checkfirst=True)
Drop all tables stored in this metadata. Conditional by default, will not attempt to drop tables not present in the target database.

Parameters
- bind: A Connectable used to access the database; if None, uses the existing bind on this MetaData, if any.
- tables: Optional list of Table objects, which is a subset of the total tables in the MetaData (others are ignored).
- checkfirst: Defaults to True, only issue DROPs for tables confirmed to be present in the target database.

is_bound()
True if this MetaData is bound to an Engine or Connection.

reflect(bind=None, schema=None, views=False, only=None)
Load all available table definitions from the database.

Automatically creates Table entries in this MetaData for any table available in the database but not yet present in the MetaData. May be called multiple times to pick up tables recently added to the database, however no special action is taken if a table in this MetaData no longer exists in the database.

Parameters
- bind: A Connectable used to access the database; if None, uses the existing bind on this MetaData, if any.
- schema: Optional, query and reflect tables from an alternate schema.
- views: If True, also reflect views.
- only: Optional. Load only a subset of available named tables. May be specified as a sequence of names or a callable. If a sequence of names is provided, only those tables will be reflected. An error is raised if a table is requested but not available. Named tables already present in this MetaData are ignored. If a callable is provided, it will be used as a boolean predicate to filter the list of potential table names. The callable is called with a table name and this MetaData instance as positional arguments and should return a true value for any table to reflect.

remove(table)
Remove the given Table object from this MetaData.

sorted_tables
Returns a list of Table objects sorted in order of dependency.

class sqlalchemy.schema.SchemaItem
Bases: sqlalchemy.events.SchemaEventTarget, sqlalchemy.sql.visitors.Visitable

Base class for items that define a database schema.

class sqlalchemy.schema.Table(*args, **kw)
Bases: sqlalchemy.schema.SchemaItem, sqlalchemy.sql.expression.TableClause

Represent a table in a database. e.g.:
mytable = Table("mytable", metadata, Column(mytable_id, Integer, primary_key=True), Column(value, String(50)) ) The Table object constructs a unique instance of itself based on its name and optional schema name within the given MetaData object. Calling the Table constructor with the same name and same MetaData argument a second time will return the same Table object - in this way the Table constructor acts as a registry function. See also: Describing Databases with MetaData - Introduction to database metadata Constructor arguments are as follows: Parameters name The name of this table as represented in the database. This property, along with the schema, indicates the singleton identity of this table in relation to its parent MetaData. Additional calls to Table with the same name, metadata, and schema name will return the same Table object. Names which contain no upper case characters will be treated as case insensitive names, and will not be quoted unless they are a reserved word. Names with any number of upper case characters will be quoted and sent exactly. Note that this behavior applies even for databases which standardize upper case names as case insensitive such as Oracle. metadata a MetaData object which will contain this table. The metadata is used as a point of association of this table with other tables which are referenced via foreign key. It also may be used to associate this table with a particular Connectable. *args Additional positional arguments are used primarily to add the list of Column objects contained within this table. Similar to the style of a CREATE TABLE statement, other SchemaItem constructs may be added here, including PrimaryKeyConstraint, and ForeignKeyConstraint. autoload Defaults to False: the Columns for this table should be reected from the database. Usually there will be no Column objects in the constructor if this property is set. autoload_with If autoload==True, this is an optional Engine or Connection instance to be used for the table reection. If None, the underlying MetaDatas bound connectable will be used. extend_existing When True, indicates that if this Table is already present in the given MetaData, apply further arguments within the constructor to the existing Table. If extend_existing or keep_existing are not set, an error is raised if additional table modiers are specied when the given Table is already present in the MetaData. implicit_returning True by default - indicates that RETURNING can be used by default to fetch newly inserted primary key values, for backends which support this. Note that create_engine() also provides an implicit_returning ag. include_columns A list of strings indicating a subset of columns to be loaded via the autoload operation; table columns who arent present in this list will not be represented on the resulting Table object. Defaults to None which indicates all columns should be reected. info A dictionary which defaults to {}. A space to store application specic data. This must be a dictionary.
- keep_existing: When True, indicates that if this Table is already present in the given MetaData, ignore further arguments within the constructor to the existing Table, and return the Table object as originally created. This is to allow a function that wishes to define a new Table on first call, but on subsequent calls will return the same Table, without any of the declarations (particularly constraints) being applied a second time. Also see extend_existing. If extend_existing or keep_existing are not set, an error is raised if additional table modifiers are specified when the given Table is already present in the MetaData.
- listeners: A list of tuples of the form (<eventname>, <fn>) which will be passed to event.listen() upon construction. This alternate hook to event.listen() allows the establishment of a listener function specific to this Table before the autoload process begins. Particularly useful for the events.column_reflect() event:

    def listen_for_reflect(table, column_info):
        "handle the column reflection event"
        # ...

    t = Table(
        'sometable',
        metadata,
        autoload=True,
        listeners=[
            ('column_reflect', listen_for_reflect)
        ])

- mustexist: When True, indicates that this Table must already be present in the given MetaData collection, else an exception is raised.
- prefixes: A list of strings to insert after CREATE in the CREATE TABLE statement. They will be separated by spaces.
- quote: Force quoting of this table's name on or off, corresponding to True or False. When left at its default of None, the column identifier will be quoted according to whether the name is case sensitive (identifiers with at least one upper case character are treated as case sensitive), or if it's a reserved word. This flag is only needed to force quoting of a reserved word which is not known by the SQLAlchemy dialect.
- quote_schema: same as quote but applies to the schema identifier.
- schema: The schema name for this table, which is required if the table resides in a schema other than the default selected schema for the engine's database connection. Defaults to None.
- useexisting: Deprecated. Use extend_existing.

__init__(*args, **kw)
Constructor for Table.

This method is a no-op. See the top-level documentation for Table for constructor arguments.

add_is_dependent_on(table)
Add a 'dependency' for this Table.

This is another Table object which must be created first before this one can, or dropped after this one. Usually, dependencies between tables are determined via ForeignKey objects. However, for other situations that create dependencies outside of foreign keys (rules, inheriting), this method can manually establish such a link, as in the sketch below.
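A minimal sketch of add_is_dependent_on(), using hypothetical table names; create_all() will then order the CREATE statements accordingly:

# 'trigger_log' has no ForeignKey to 'accounts', but a database trigger
# created elsewhere requires that 'accounts' exist first
trigger_log.add_is_dependent_on(accounts)
meta.create_all(engine)   # emits CREATE TABLE accounts, then trigger_log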
append_column(column)
Append a Column to this Table.

The "key" of the newly added Column, i.e. the value of its .key attribute, will then be available in the .c collection of this Table, and the column definition will be included in any CREATE TABLE, SELECT, UPDATE, etc. statements generated from this Table construct.

Note that this does not change the definition of the table as it exists within any underlying database, assuming that table has already been created in the database. Relational databases support the addition of columns to existing tables using the SQL ALTER command, which would need to be emitted for an already-existing table that doesn't contain the newly added column.

append_constraint(constraint)
Append a Constraint to this Table.

This has the effect of the constraint being included in any future CREATE TABLE statement, assuming specific DDL creation events have not been associated with the given Constraint object.

Note that this does not produce the constraint within the relational database automatically, for a table that already exists in the database. To add a constraint to an existing relational database table, the SQL ALTER command must be used. SQLAlchemy also provides the AddConstraint construct which can produce this SQL when invoked as an executable clause.

append_ddl_listener(event_name, listener)
Append a DDL event listener to this Table. Deprecated. See DDLEvents.

bind
Return the connectable associated with this Table.

create(bind=None, checkfirst=False)
Issue a CREATE statement for this Table, using the given Connectable for connectivity.

See also MetaData.create_all().

drop(bind=None, checkfirst=False)
Issue a DROP statement for this Table, using the given Connectable for connectivity.

See also MetaData.drop_all().

exists(bind=None)
Return True if this table exists.

get_children(column_collections=True, schema_visitor=False, **kw)

key

tometadata(metadata, schema=<symbol 'retain_schema'>)
Return a copy of this Table associated with a different MetaData. E.g.:

# create two metadata
meta1 = MetaData('sqlite:///querytest.db')
meta2 = MetaData()

# load 'users' from the sqlite engine
users_table = Table('users', meta1, autoload=True)

# create the same Table object for the plain metadata
users_table_2 = users_table.tometadata(meta2)
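The schema argument shown in the tometadata() signature can also be supplied to assign a different schema to the copy; a short sketch with a hypothetical "archive" schema:

archive_meta = MetaData()
users_archived = users_table.tometadata(archive_meta, schema='archive')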
class sqlalchemy.schema.ThreadLocalMetaData
Bases: sqlalchemy.schema.MetaData

A MetaData variant that presents a different bind in every thread.

Makes the bind property of the MetaData a thread-local value, allowing this collection of tables to be bound to different Engine implementations or connections in each thread.

The ThreadLocalMetaData starts off bound to None in each thread. Binds must be made explicitly by assigning to the bind property or using connect(). You can also re-bind dynamically multiple times per thread, just like a regular MetaData.

__init__()
Construct a ThreadLocalMetaData.

bind
The bound Engine or Connection for this thread.

This property may be assigned an Engine or Connection, or assigned a string or URL to automatically create a basic Engine for this bind with create_engine().

dispose()
Dispose all bound engines, in all thread contexts.

is_bound()
True if there is a bind for this thread.
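A minimal sketch of the per-thread binding described above; the engine URL here is arbitrary:

from sqlalchemy import create_engine, ThreadLocalMetaData

meta = ThreadLocalMetaData()

# each thread assigns its own bind; other threads are unaffected
meta.bind = create_engine('sqlite://')
assert meta.is_bound()

# dispose engines across all thread contexts when finished
meta.dispose()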
>>> shopping_cart_items = Table(shopping_cart_items, meta, autoload=True, autoload_with=e >>> shopping_carts in meta.tables: True The MetaData has an interesting singleton-like behavior such that if you requested both tables individually, MetaData will ensure that exactly one Table object is created for each distinct table name. The Table constructor actually returns to you the already-existing Table object if one already exists with the given name. Such as below, we can access the already generated shopping_carts table just by naming it: shopping_carts = Table(shopping_carts, meta) Of course, its a good idea to use autoload=True with the above table regardless. This is so that the tables attributes will be loaded if they have not been already. The autoload operation only occurs for the table if it hasnt already been loaded; once loaded, new calls to Table with the same name will not re-issue any reection queries. 356 Chapter 3. SQLAlchemy Core
Overriding Reflected Columns

Individual columns can be overridden with explicit values when reflecting tables; this is handy for specifying custom datatypes, constraints such as primary keys that may not be configured within the database, etc.:

>>> mytable = Table('mytable', meta,
... Column('id', Integer, primary_key=True),   # override reflected 'id' to have primary key
... Column('mydata', Unicode(50)),             # override reflected 'mydata' to be Unicode
... autoload=True)
Reflecting Views

The reflection system can also reflect views. Basic usage is the same as that of a table:

my_view = Table("some_view", metadata, autoload=True)

Above, my_view is a Table object with Column objects representing the names and types of each column within the view "some_view".

Usually, it's desired to have at least a primary key constraint when reflecting a view, if not foreign keys as well. View reflection doesn't extrapolate these constraints. Use the "override" technique for this, specifying explicitly those columns which are part of the primary key or have foreign key constraints:

my_view = Table("some_view", metadata,
    Column("view_id", Integer, primary_key=True),
    Column("related_thing", Integer, ForeignKey("othertable.thing_id")),
    autoload=True
)

Reflecting All Tables at Once

The MetaData object can also get a listing of tables and reflect the full set. This is achieved by using the reflect() method. After calling it, all located tables are present within the MetaData object's dictionary of tables:

meta = MetaData()
meta.reflect(bind=someengine)
users_table = meta.tables['users']
addresses_table = meta.tables['addresses']

metadata.reflect() also provides a handy way to clear or delete all the rows in a database:

meta = MetaData()
meta.reflect(bind=someengine)
for table in reversed(meta.sorted_tables):
    someengine.execute(table.delete())

Fine Grained Reflection with Inspector

A low level interface which provides a backend-agnostic system of loading lists of schema, table, column, and constraint descriptions from a given database is also available. This is known as the Inspector:

from sqlalchemy import create_engine
from sqlalchemy.engine import reflection

engine = create_engine(...)
insp = reflection.Inspector.from_engine(engine)
print insp.get_table_names()

class sqlalchemy.engine.reflection.Inspector(bind)
Bases: object

Performs database schema inspection.

The Inspector acts as a proxy to the reflection methods of the Dialect, providing a consistent interface as well as caching support for previously fetched metadata.

The preferred method to construct an Inspector is via the Inspector.from_engine() method. I.e.:

engine = create_engine(...)
insp = Inspector.from_engine(engine)

Where above, the Dialect may opt to return an Inspector subclass that provides additional methods specific to the dialect's target database.

__init__(bind)
Initialize a new Inspector.

Parameters
- bind: a Connectable, which is typically an instance of Engine or Connection.

For a dialect-specific instance of Inspector, see Inspector.from_engine().

default_schema_name
Return the default schema name presented by the dialect for the current engine's database user.

E.g. this is typically "public" for Postgresql and "dbo" for SQL Server.

classmethod from_engine(bind)
Construct a new dialect-specific Inspector object from the given engine or connection.

Parameters
- bind: a Connectable, which is typically an instance of Engine or Connection.

This method differs from a direct constructor call of Inspector in that the Dialect is given a chance to provide a dialect-specific Inspector instance, which may provide additional methods. See the example at Inspector.

get_columns(table_name, schema=None, **kw)
Return information about columns in table_name.

Given a string table_name and an optional string schema, return column information as a list of dicts with these keys:
- name: the column's name
- type: TypeEngine
- nullable: boolean
- default: the column's default value
- attrs: dict containing optional column attributes

get_foreign_keys(table_name, schema=None, **kw)
Return information about foreign_keys in table_name.

Given a string table_name, and an optional string schema, return foreign key information as a list of dicts with these keys:
- constrained_columns: a list of column names that make up the foreign key
- referred_schema: the name of the referred schema
- referred_table: the name of the referred table
- referred_columns: a list of column names in the referred table that correspond to constrained_columns
- name: optional name of the foreign key constraint
- **kw: other options passed to the dialect's get_foreign_keys() method

get_indexes(table_name, schema=None, **kw)
Return information about indexes in table_name.

Given a string table_name and an optional string schema, return index information as a list of dicts with these keys:
- name: the index's name
- column_names: list of column names in order
- unique: boolean
- **kw: other options passed to the dialect's get_indexes() method

get_pk_constraint(table_name, schema=None, **kw)
Return information about primary key constraint on table_name.

Given a string table_name, and an optional string schema, return primary key information as a dictionary with these keys:
- constrained_columns: a list of column names that make up the primary key
- name: optional name of the primary key constraint

get_primary_keys(table_name, schema=None, **kw)
Return information about primary keys in table_name.

Given a string table_name, and an optional string schema, return primary key information as a list of column names.

get_schema_names()
Return all schema names.

get_table_names(schema=None, order_by=None)
Return all table names in schema.

Parameters
- schema: Optional, retrieve names from a non-default schema.
- order_by: Optional, may be the string "foreign_key" to sort the result on foreign key dependencies.

This should probably not return view names or maybe it should return them with an indicator t or v.

get_table_options(table_name, schema=None, **kw)
Return a dictionary of options specified when the table of the given name was created.

This currently includes some options that apply to MySQL tables.

get_view_definition(view_name, schema=None)
Return definition for view_name.

Parameters
- schema: Optional, retrieve names from a non-default schema.
get_view_names(schema=None)
Return all view names in schema.

Parameters
- schema: Optional, retrieve names from a non-default schema.

reflecttable(table, include_columns)
Given a Table object, load its internal constructs based on introspection.

This is the underlying method used by most dialects to produce table reflection. Direct usage is like:

from sqlalchemy import create_engine, MetaData, Table
from sqlalchemy.engine import reflection

engine = create_engine(...)
meta = MetaData()
user_table = Table('user', meta)
insp = Inspector.from_engine(engine)
insp.reflecttable(user_table, None)

Parameters
- table: a Table instance.
- include_columns: a list of string column names to include in the reflection process. If None, all columns are reflected.
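To tie the Inspector methods above together, a small sketch that walks every table and prints its column descriptions; the database URL is hypothetical:

from sqlalchemy import create_engine
from sqlalchemy.engine import reflection

engine = create_engine('sqlite:///mydb.db')
insp = reflection.Inspector.from_engine(engine)

for table_name in insp.get_table_names():
    for col in insp.get_columns(table_name):
        # each dict has 'name', 'type', 'nullable' and 'default' keys
        print table_name, col['name'], col['type'], col['nullable']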
Table("mytable", meta, Column("somecolumn", Integer, onupdate=25) ) Python-Executed Functions The default and onupdate keyword arguments also accept Python functions. These functions are invoked at the time of insert or update if no other value for that column is supplied, and the value returned is used for the columns value. Below illustrates a crude sequence that assigns an incrementing counter to a primary key column: # a function which counts upwards i = 0 def mydefault(): global i i += 1 return i t = Table("mytable", meta, Column(id, Integer, primary_key=True, default=mydefault), ) It should be noted that for real incrementing sequence behavior, the built-in capabilities of the database should normally be used, which may include sequence objects or other autoincrementing capabilities. For primary key columns, SQLAlchemy will in most cases use these capabilities automatically. See the API documentation for Column including the autoincrement ag, as well as the section on Sequence later in this chapter for background on standard primary key generation techniques. To illustrate onupdate, we assign the Python datetime function now to the onupdate attribute: import datetime t = Table("mytable", meta, Column(id, Integer, primary_key=True), # define last_updated to be populated with datetime.now() Column(last_updated, DateTime, onupdate=datetime.datetime.now), ) When an update statement executes and no value is passed for last_updated, the datetime.datetime.now() Python function is executed and its return value used as the value for last_updated. Notice that we provide now as the function itself without calling it (i.e. there are no parenthesis following) - SQLAlchemy will execute the function at the time the statement executes.
Context-Sensitive Default Functions

The Python functions used by default and onupdate may also make use of the current statement's context. To access the context, provide a function that accepts a single context argument; the context's current_parameters member is a dictionary of the parameters in the current statement execution:

def mydefault(context):
    return context.current_parameters['counter'] + 12

t = Table('mytable', meta,
    Column('counter', Integer),
    Column('counter_plus_twelve', Integer, default=mydefault, onupdate=mydefault)
)

Above we illustrate a default function which will execute for all INSERT and UPDATE statements where a value for counter_plus_twelve was otherwise not provided, and the value will be that of whatever value is present in the execution for the counter column, plus the number 12.

While the context object passed to the default function has many attributes, the current_parameters member is a special member provided only during the execution of a default function for the purposes of deriving defaults from its existing values. For a single statement that is executing many sets of bind parameters, the user-defined function is called for each set of parameters, and current_parameters will be provided with each individual parameter set for each execution.

SQL Expressions

The default and onupdate keywords may also be passed SQL expressions, including select statements or direct function calls:

t = Table("mytable", meta,
    Column('id', Integer, primary_key=True),

    # define 'create_date' to default to now()
    Column('create_date', DateTime, default=func.now()),
    # define 'key' to pull its default from the 'keyvalues' table
    Column('key', String(20),
        default=keyvalues.select(keyvalues.c.type == 'type1', limit=1)),

    # define 'last_modified' to use the current_timestamp SQL function on update
    Column('last_modified', DateTime, onupdate=func.utc_timestamp())
)

Above, the create_date column will be populated with the result of the now() SQL function (which, depending on backend, compiles into NOW() or CURRENT_TIMESTAMP in most cases) during an INSERT statement, and the key column with the result of a SELECT subquery from another table. The last_modified column will be populated with the value of UTC_TIMESTAMP(), a function specific to MySQL, when an UPDATE statement is emitted for this table.

Note that when using func functions, unlike when using Python datetime functions we do call the function, i.e. with parentheses "()" - this is because what we want in this case is the return value of the function, which is the SQL expression construct that will be rendered into the INSERT or UPDATE statement.

The above SQL functions are usually executed "inline" with the INSERT or UPDATE statement being executed, meaning, a single statement is executed which embeds the given expressions or subqueries within the VALUES or SET clause of the statement. Although in some cases, the function is "pre-executed" in a SELECT statement of its own beforehand. This happens when all of the following is true:

- the column is a primary key column
- the database dialect does not support a usable cursor.lastrowid accessor (or equivalent); this currently includes PostgreSQL, Oracle, and Firebird, as well as some MySQL dialects
- the dialect does not support the "RETURNING" clause or similar, or the implicit_returning flag is set to False for the dialect. Dialects which support RETURNING currently include Postgresql, Oracle, Firebird, and MS-SQL
- the statement is a single execution, i.e. only supplies one set of parameters and doesn't use "executemany" behavior
- the inline=True flag is not set on the Insert() or Update() construct, and the statement has not defined an explicit returning() clause

Whether or not the default generation clause "pre-executes" is not something that normally needs to be considered, unless it is being addressed for performance reasons.

When the statement is executed with a single set of parameters (that is, it is not an "executemany" style execution), the returned ResultProxy will contain a collection accessible via result.postfetch_cols() which contains a list of all Column objects which had an inline-executed default. Similarly, all parameters which were bound to the statement, including all Python and SQL expressions which were pre-executed, are present in the last_inserted_params() or last_updated_params() collections on ResultProxy. The inserted_primary_key collection contains a list of primary key values for the row inserted (a list so that single-column and composite-column primary keys are represented in the same format).

Server Side Defaults

A variant on the SQL expression default is the server_default, which gets placed in the CREATE TABLE statement during a create() operation:

t = Table('test', meta,
    Column('abc', String(20), server_default='abc'),
    Column('created_at', DateTime, server_default=text("sysdate"))
)

A create call for the above table will produce:

CREATE TABLE test (
    abc varchar(20) default 'abc',
    created_at datetime default sysdate
)

The behavior of server_default is similar to that of a regular SQL default; if it's placed on a primary key column for a database which doesn't have a way to "postfetch" the ID, and the statement is not "inlined", the SQL expression is pre-executed; otherwise, SQLAlchemy lets the default fire off on the database side normally.

Triggered Columns

Columns with values set by a database trigger or other external process may be called out with a marker:

t = Table('test', meta,
    Column('abc', String(20), server_default=FetchedValue()),
    Column('def', String(20), server_onupdate=FetchedValue())
)

These markers do not emit a "default" clause when the table is created, however they do set the same internal flags as a static server_default clause, providing hints to higher-level tools that a "post-fetch" of these rows should be performed after an insert or update.

Defining Sequences

SQLAlchemy represents database sequences using the Sequence object, which is considered to be a special case of "column default". It only has an effect on databases which have explicit support for sequences, which currently includes Postgresql, Oracle, and Firebird. The Sequence object is otherwise ignored.
The Sequence may be placed on any column as a "default" generator to be used during INSERT operations, and can also be configured to fire off during UPDATE operations if desired. It is most commonly used in conjunction with a single integer primary key column:

table = Table("cartitems", meta,
    Column("cart_id", Integer, Sequence('cart_id_seq'), primary_key=True),
    Column("description", String(40)),
    Column("createdate", DateTime())
)

Where above, the table "cartitems" is associated with a sequence named "cart_id_seq". When INSERT statements take place for "cartitems", and no value is passed for the "cart_id" column, the "cart_id_seq" sequence will be used to generate a value. When the Sequence is associated with a table, CREATE and DROP statements issued for that table will also issue CREATE/DROP for the sequence object as well, thus "bundling" the sequence object with its parent table.

The Sequence object also implements special functionality to accommodate Postgresql's SERIAL datatype. The SERIAL type in PG automatically generates a sequence that is used implicitly during inserts. This means that if a Table object defines a Sequence on its primary key column so that it works with Oracle and Firebird, the Sequence would get in the way of the "implicit" sequence that PG would normally use. For this use case, add the flag optional=True to the Sequence object - this indicates that the Sequence should only be used if the database provides no other option for generating primary key identifiers.

The Sequence object also has the ability to be executed standalone like a SQL expression, which has the effect of calling its "next value" function:

seq = Sequence('some_sequence')
nextid = connection.execute(seq)

Default Objects API

class sqlalchemy.schema.ColumnDefault(arg, **kwargs)
Bases: sqlalchemy.schema.DefaultGenerator

A plain default value on a column.

This could correspond to a constant, a callable function, or a SQL clause.

ColumnDefault is generated automatically whenever the default, onupdate arguments of Column are used. A ColumnDefault can be passed positionally as well. For example, the following:

Column('foo', Integer, default=50)

Is equivalent to:

Column('foo', Integer, ColumnDefault(50))

class sqlalchemy.schema.DefaultClause(arg, for_update=False, _reflected=False)
Bases: sqlalchemy.schema.FetchedValue

A DDL-specified DEFAULT column value.

DefaultClause is a FetchedValue that also generates a "DEFAULT" clause when "CREATE TABLE" is emitted.

DefaultClause is generated automatically whenever the server_default, server_onupdate arguments of Column are used. A DefaultClause can be passed positionally as well.
For example, the following:

Column('foo', Integer, server_default="50")

Is equivalent to:

Column('foo', Integer, DefaultClause("50"))

class sqlalchemy.schema.DefaultGenerator(for_update=False)
Bases: sqlalchemy.schema._NotAColumnExpr, sqlalchemy.schema.SchemaItem

Base class for column "default" values.

class sqlalchemy.schema.FetchedValue(for_update=False)
Bases: sqlalchemy.schema._NotAColumnExpr, sqlalchemy.events.SchemaEventTarget

A marker for a transparent database-side default.

Use FetchedValue when the database is configured to provide some automatic default for a column. E.g.:

Column('foo', Integer, FetchedValue())

Would indicate that some trigger or default generator will create a new value for the foo column during an INSERT.

class sqlalchemy.schema.PassiveDefault(*arg, **kw)
Bases: sqlalchemy.schema.DefaultClause

A DDL-specified DEFAULT column value.

Deprecated since version 0.6: PassiveDefault is deprecated. Use DefaultClause.

class sqlalchemy.schema.Sequence(name, start=None, increment=None, schema=None, optional=False, quote=None, metadata=None, for_update=False)
Bases: sqlalchemy.schema.DefaultGenerator

Represents a named database sequence.

The Sequence object represents the name and configurational parameters of a database sequence. It also represents a construct that can be "executed" by a SQLAlchemy Engine or Connection, rendering the appropriate "next value" function for the target database and returning a result.

The Sequence is typically associated with a primary key column:

some_table = Table('some_table', metadata,
    Column('id', Integer, Sequence('some_table_seq'), primary_key=True)
)

When CREATE TABLE is emitted for the above Table, if the target platform supports sequences, a CREATE SEQUENCE statement will be emitted as well. For platforms that don't support sequences, the Sequence construct is ignored.

See also: CreateSequence, DropSequence

__init__(name, start=None, increment=None, schema=None, optional=False, quote=None, metadata=None, for_update=False)
Construct a Sequence object.

Parameters
- name: The name of the sequence.
- start: the starting index of the sequence. This value is used when the CREATE SEQUENCE command is emitted to the database as the value of the "START WITH" clause. If None, the clause is omitted, which on most platforms indicates a starting value of 1.
- increment: the increment value of the sequence. This value is used when the CREATE SEQUENCE command is emitted to the database as the value of the "INCREMENT BY" clause. If None, the clause is omitted, which on most platforms indicates an increment of 1.
- schema: Optional schema name for the sequence, if located in a schema other than the default.
- optional: boolean value, when True, indicates that this Sequence object only needs to be explicitly generated on backends that don't provide another way to generate primary key identifiers. Currently, it essentially means, "don't create this sequence on the Postgresql backend, where the SERIAL keyword creates a sequence for us automatically".
- quote: boolean value, when True or False, explicitly forces quoting of the schema name on or off. When left at its default of None, normal quoting rules based on casing and reserved words take place.
- metadata: optional MetaData object which will be associated with this Sequence. A Sequence that is associated with a MetaData gains access to the bind of that MetaData, meaning the Sequence.create() and Sequence.drop() methods will make usage of that engine automatically. Additionally, the appropriate CREATE SEQUENCE/DROP SEQUENCE DDL commands will be emitted corresponding to this Sequence when MetaData.create_all() and MetaData.drop_all() are invoked (new in 0.7). Note that when a Sequence is applied to a Column, the Sequence is automatically associated with the MetaData object of that column's parent Table, when that association is made. The Sequence will then be subject to automatic CREATE SEQUENCE/DROP SEQUENCE corresponding to when the Table object itself is created or dropped, rather than that of the MetaData object overall.
- for_update: Indicates this Sequence, when associated with a Column, should be invoked for UPDATE statements on that column's table, rather than for INSERT statements, when no value is otherwise present for that column in the statement.

create(bind=None, checkfirst=True)
Creates this sequence in the database.

drop(bind=None, checkfirst=True)
Drops this sequence from the database.

next_value()
Return a next_value function element which will render the appropriate increment function for this Sequence within any SQL expression.
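A short sketch of next_value() embedded in a SQL expression; the sequence name is hypothetical and a sequence-supporting backend such as Postgresql is assumed:

from sqlalchemy import Sequence, select

seq = Sequence('some_sequence')

# renders e.g. SELECT nextval('some_sequence') on Postgresql
nextid = connection.execute(select([seq.next_value()])).scalar()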
The referenced columns almost always define the primary key of their owning table, though there are exceptions to this. The foreign key is the "joint" that connects together pairs of rows which have a relationship with each other, and SQLAlchemy assigns very deep importance to this concept in virtually every area of its operation.

In SQLAlchemy as well as in DDL, foreign key constraints can be defined as additional attributes within the table clause, or for single-column foreign keys they may optionally be specified within the definition of a single column. The single column foreign key is more common, and at the column level is specified by constructing a ForeignKey object as an argument to a Column object:

user_preference = Table('user_preference', metadata,
    Column('pref_id', Integer, primary_key=True),
    Column('user_id', Integer, ForeignKey("user.user_id"), nullable=False),
    Column('pref_name', String(40), nullable=False),
    Column('pref_value', String(100))
)

Above, we define a new table user_preference for which each row must contain a value in the user_id column that also exists in the user table's user_id column.

The argument to ForeignKey is most commonly a string of the form <tablename>.<columnname>, or for a table in a remote schema or "owner" of the form <schemaname>.<tablename>.<columnname>. It may also be an actual Column object, which as we'll see later is accessed from an existing Table object via its c collection:

ForeignKey(user.c.user_id)

The advantage to using a string is that the in-python linkage between user and user_preference is resolved only when first needed, so that table objects can be easily spread across multiple modules and defined in any order.

Foreign keys may also be defined at the table level, using the ForeignKeyConstraint object. This object can describe a single- or multi-column foreign key. A multi-column foreign key is known as a "composite" foreign key, and almost always references a table that has a composite primary key. Below we define a table invoice which has a composite primary key:

invoice = Table('invoice', metadata,
    Column('invoice_id', Integer, primary_key=True),
    Column('ref_num', Integer, primary_key=True),
    Column('description', String(60), nullable=False)
)

And then a table invoice_item with a composite foreign key referencing invoice:
invoice_item = Table('invoice_item', metadata,
    Column('item_id', Integer, primary_key=True),
    Column('item_name', String(60), nullable=False),
    Column('invoice_id', Integer, nullable=False),
    Column('ref_num', Integer, nullable=False),
    ForeignKeyConstraint(['invoice_id', 'ref_num'], ['invoice.invoice_id', 'invoice.ref_num'])
)

It's important to note that the ForeignKeyConstraint is the only way to define a composite foreign key. While we could also have placed individual ForeignKey objects on both the invoice_item.invoice_id and invoice_item.ref_num columns, SQLAlchemy would not be aware that these two values should be paired together - it would be two individual foreign key constraints instead of a single composite foreign key referencing two columns.
ForeignKeyConstraint invokes the CONSTRAINT keyword inline with CREATE TABLE. There are some cases where this is undesirable, particularly when two tables reference each other mutually, each with a foreign key referencing the other. In such a situation at least one of the foreign key constraints must be generated after both tables have been built. To support such a scheme, ForeignKey and ForeignKeyConstraint offer the flag use_alter=True. When using this flag, the constraint will be generated using a definition similar to ALTER TABLE <tablename> ADD CONSTRAINT <name> .... Since a name is required, the name attribute must also be specified. For example:

node = Table('node', meta,
    Column('node_id', Integer, primary_key=True),
    Column('primary_element', Integer,
        ForeignKey('element.element_id', use_alter=True, name='fk_node_element_id')
    )
)

element = Table('element', meta,
    Column('element_id', Integer, primary_key=True),
    Column('parent_node_id', Integer),
    ForeignKeyConstraint(
        ['parent_node_id'],
        ['node.node_id'],
        use_alter=True,
        name='fk_element_parent_node_id'
    )
)
Note that the ON UPDATE and ON DELETE clauses, emitted via the onupdate and ondelete keyword arguments of ForeignKey and ForeignKeyConstraint, are not supported on SQLite, and require InnoDB tables when used with MySQL. They may also not be supported on other databases.

UNIQUE Constraint

Unique constraints can be created anonymously on a single column using the unique keyword on Column. Explicitly named unique constraints and/or those with multiple columns are created via the UniqueConstraint table-level construct.

meta = MetaData()
mytable = Table('mytable', meta,
    # per-column anonymous unique constraint
    Column('col1', Integer, unique=True),

    Column('col2', Integer),
    Column('col3', Integer),

    # explicit/composite unique constraint. 'name' is optional.
    UniqueConstraint('col2', 'col3', name='uix_1')
)

CHECK Constraint

Check constraints can be named or unnamed and can be created at the Column or Table level, using the CheckConstraint construct. The text of the check constraint is passed directly through to the database, so there is limited "database independent" behavior. Column level check constraints generally should only refer to the column to which they are placed, while table level constraints can refer to any columns in the table. Note that some databases, such as MySQL, do not actively support check constraints.

meta = MetaData()
mytable = Table('mytable', meta,
    # per-column CHECK constraint
    Column('col1', Integer, CheckConstraint('col1>5')),

    Column('col2', Integer),
    Column('col3', Integer),

    # table level CHECK constraint. 'name' is optional.
    CheckConstraint('col2 > col3 + 5', name='check1')
)

mytable.create(engine)
CREATE TABLE mytable (
    col1 INTEGER CHECK (col1>5),
    col2 INTEGER,
    col3 INTEGER,
    CONSTRAINT check1 CHECK (col2 > col3 + 5)
)
Setting up Constraints when using the Declarative ORM Extension

The Table is the SQLAlchemy Core construct that allows one to define table metadata, which among other things can be used by the SQLAlchemy ORM as a target to map a class. The Declarative extension allows the Table object to be created automatically, given the contents of the table primarily as a mapping of Column objects.

To apply table-level constraint objects such as ForeignKeyConstraint to a table defined using Declarative, use the __table_args__ attribute, described at Table Configuration.

Constraints API

class sqlalchemy.schema.Constraint(name=None, deferrable=None, initially=None, _create_rule=None)
Bases: sqlalchemy.schema.SchemaItem

A table-level SQL constraint.

class sqlalchemy.schema.CheckConstraint(sqltext, name=None, deferrable=None, initially=None, table=None, _create_rule=None)
Bases: sqlalchemy.schema.Constraint

A table- or column-level CHECK constraint.

Can be included in the definition of a Table or Column.

class sqlalchemy.schema.ColumnCollectionConstraint(*columns, **kw)
Bases: sqlalchemy.schema.ColumnCollectionMixin, sqlalchemy.schema.Constraint

A constraint that proxies a ColumnCollection.

class sqlalchemy.schema.ForeignKey(column, _constraint=None, use_alter=False, name=None, onupdate=None, ondelete=None, deferrable=None, initially=None, link_to_name=False)
Bases: sqlalchemy.schema.SchemaItem

Defines a dependency between two columns.

ForeignKey is specified as an argument to a Column object, e.g.:

t = Table("remote_table", metadata,
    Column("remote_id", ForeignKey("main_table.id"))
)

Note that ForeignKey is only a marker object that defines a dependency between two columns. The actual constraint is in all cases represented by the ForeignKeyConstraint object. This object will be generated automatically when a ForeignKey is associated with a Column which in turn is associated with a Table. Conversely, when ForeignKeyConstraint is applied to a Table, ForeignKey markers are automatically generated to be present on each associated Column, which are also associated with the constraint object.

Note that you cannot define a "composite" foreign key constraint, that is a constraint between a grouping of multiple parent/child columns, using ForeignKey objects. To define this grouping, the ForeignKeyConstraint object must be used, and applied to the Table. The associated ForeignKey objects are created automatically.

The ForeignKey objects associated with an individual Column object are available in the foreign_keys collection of that column.

Further examples of foreign key configuration are in Defining Foreign Keys.
__init__(column, _constraint=None, use_alter=False, name=None, onupdate=None, ondelete=None, deferrable=None, initially=None, link_to_name=False)
Construct a column-level FOREIGN KEY.
The ForeignKey object when constructed generates a ForeignKeyConstraint which is associated with the parent Table object's collection of constraints.

Parameters
- column: A single target column for the key relationship. A Column object or a column name as a string: tablename.columnkey or schema.tablename.columnkey. columnkey is the key which has been assigned to the column (defaults to the column name itself), unless link_to_name is True in which case the rendered name of the column is used.
- name: Optional string. An in-database name for the key if constraint is not provided.
- onupdate: Optional string. If set, emit ON UPDATE <value> when issuing DDL for this constraint. Typical values include CASCADE, SET NULL and RESTRICT.
- ondelete: Optional string. If set, emit ON DELETE <value> when issuing DDL for this constraint. Typical values include CASCADE, SET NULL and RESTRICT.
- deferrable: Optional bool. If set, emit DEFERRABLE or NOT DEFERRABLE when issuing DDL for this constraint.
- initially: Optional string. If set, emit INITIALLY <value> when issuing DDL for this constraint.
- link_to_name: if True, the string name given in column is the rendered name of the referenced column, not its locally assigned key.
- use_alter: passed to the underlying ForeignKeyConstraint to indicate the constraint should be generated/dropped externally from the CREATE TABLE/DROP TABLE statement. See that class's constructor for details.

column
Return the target Column referenced by this ForeignKey.

If this ForeignKey was created using a string-based target column specification, this attribute will on first access initiate a resolution process to locate the referenced remote Column. The resolution process traverses to the parent Column, Table, and MetaData to proceed - if any of these aren't yet present, an error is raised.

copy(schema=None)
Produce a copy of this ForeignKey object.

The new ForeignKey will not be bound to any Column. This method is usually used by the internal copy procedures of Column, Table, and MetaData.

Parameters
- schema: The returned ForeignKey will reference the original table and column name, qualified by the given string schema name.

get_referent(table)
Return the Column in the given Table referenced by this ForeignKey.

Returns None if this ForeignKey does not reference the given Table.

references(table)
Return True if the given Table is referenced by this ForeignKey.

target_fullname
Return a string based 'column specification' for this ForeignKey.
        This is usually the equivalent of the string-based tablename.colname argument first passed to the object's constructor.

class sqlalchemy.schema.ForeignKeyConstraint(columns, refcolumns, name=None, onupdate=None, ondelete=None, deferrable=None, initially=None, use_alter=False, link_to_name=False, table=None)
    Bases: sqlalchemy.schema.Constraint
    A table-level FOREIGN KEY constraint.
    Defines a single column or composite FOREIGN KEY ... REFERENCES constraint. For a no-frills, single column foreign key, adding a ForeignKey to the definition of a Column is a shorthand equivalent for an unnamed, single column ForeignKeyConstraint.
    Examples of foreign key configuration are in Defining Foreign Keys.

    __init__(columns, refcolumns, name=None, onupdate=None, ondelete=None, deferrable=None, initially=None, use_alter=False, link_to_name=False, table=None)
        Construct a composite-capable FOREIGN KEY.
        Parameters:
            columns: A sequence of local column names. The named columns must be defined and present in the parent Table. The names should match the key given to each column (defaults to the name) unless link_to_name is True.
            refcolumns: A sequence of foreign column names or Column objects. The columns must all be located within the same Table.
            name: Optional, the in-database name of the key.
            onupdate: Optional string. If set, emit ON UPDATE <value> when issuing DDL for this constraint. Typical values include CASCADE, SET NULL and RESTRICT.
            ondelete: Optional string. If set, emit ON DELETE <value> when issuing DDL for this constraint. Typical values include CASCADE, SET NULL and RESTRICT.
            deferrable: Optional bool. If set, emit DEFERRABLE or NOT DEFERRABLE when issuing DDL for this constraint.
            initially: Optional string. If set, emit INITIALLY <value> when issuing DDL for this constraint.
            link_to_name: if True, the string name given in column is the rendered name of the referenced column, not its locally assigned key.
            use_alter: If True, do not emit the DDL for this constraint as part of the CREATE TABLE definition. Instead, generate it via an ALTER TABLE statement issued after the full collection of tables have been created, and drop it via an ALTER TABLE statement before the full collection of tables are dropped. This is shorthand for the usage of AddConstraint and DropConstraint applied as "after-create" and "before-drop" events on the MetaData object. This is normally used to generate/drop constraints on objects that are mutually dependent on each other.

class sqlalchemy.schema.PrimaryKeyConstraint(*columns, **kw)
    Bases: sqlalchemy.schema.ColumnCollectionConstraint
    A table-level PRIMARY KEY constraint.
    Defines a single column or composite PRIMARY KEY constraint. For a no-frills primary key, adding primary_key=True to one or more Column definitions is a shorthand equivalent for an unnamed single- or multiple-column PrimaryKeyConstraint.
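A sketch of the explicit composite form (equivalent to marking both columns with primary_key=True; the table is illustrative):

from sqlalchemy import MetaData, Table, Column, Integer, PrimaryKeyConstraint

metadata = MetaData()

# string names refer to columns already present in the enclosing Table
t = Table('version_info', metadata,
    Column('id', Integer),
    Column('version', Integer),
    PrimaryKeyConstraint('id', 'version')
)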
class sqlalchemy.schema.UniqueConstraint(*columns, **kw)
    Bases: sqlalchemy.schema.ColumnCollectionConstraint
    A table-level UNIQUE constraint.
    Defines a single column or composite UNIQUE constraint. For a no-frills, single column constraint, adding unique=True to the Column definition is a shorthand equivalent for an unnamed, single column UniqueConstraint.

Indexes

Indexes can be created anonymously (using an auto-generated name ix_<column label>) for a single column using the inline index keyword on Column, which also modifies the usage of unique to apply the uniqueness to the index itself, instead of adding a separate UNIQUE constraint. For indexes with specific names or which encompass more than one column, use the Index construct, which requires a name.

Below we illustrate a Table with several Index objects associated. The DDL for CREATE INDEX is issued right after the create statements for the table:

meta = MetaData()
mytable = Table('mytable', meta,
    # an indexed column, with index "ix_mytable_col1"
    Column('col1', Integer, index=True),

    # a uniquely indexed column with index "ix_mytable_col2"
    Column('col2', Integer, index=True, unique=True),

    Column('col3', Integer),
    Column('col4', Integer),

    Column('col5', Integer),
    Column('col6', Integer),
)

# place an index on col3, col4
Index('idx_col34', mytable.c.col3, mytable.c.col4)

# place a unique index on col5, col6
Index('myindex', mytable.c.col5, mytable.c.col6, unique=True)

mytable.create(engine)
CREATE TABLE mytable (
    col1 INTEGER,
    col2 INTEGER,
    col3 INTEGER,
    col4 INTEGER,
    col5 INTEGER,
    col6 INTEGER
)
CREATE INDEX ix_mytable_col1 ON mytable (col1)
CREATE UNIQUE INDEX ix_mytable_col2 ON mytable (col2)
CREATE UNIQUE INDEX myindex ON mytable (col5, col6)
CREATE INDEX idx_col34 ON mytable (col3, col4)

Note in the example above, the Index construct is created externally to the table to which it corresponds, using Column objects directly. As of SQLAlchemy 0.7, Index also supports inline definition inside the Table, using string names to identify columns:

meta = MetaData()
mytable = Table('mytable', meta,
    Column('col1', Integer),
    Column('col2', Integer),
    Column('col3', Integer),
    Column('col4', Integer),

    # place an index on col1, col2
    Index('idx_col12', 'col1', 'col2'),

    # place a unique index on col3, col4
    Index('idx_col34', 'col3', 'col4', unique=True)
)

The Index object also supports its own create() method:

i = Index('someindex', mytable.c.col5)
i.create(engine)
CREATE INDEX someindex ON mytable (col5)

class sqlalchemy.schema.Index(name, *columns, **kw)
    Bases: sqlalchemy.schema.ColumnCollectionMixin, sqlalchemy.schema.SchemaItem
    A table-level INDEX.
    Defines a composite (one or more column) INDEX. For a no-frills, single column index, adding index=True to the Column definition is a shorthand equivalent for an unnamed, single column Index.
    See also:
        Indexes - General information on Index.
        Postgresql-Specific Index Options - PostgreSQL-specific options available for the Index construct.
        MySQL Specific Index Options - MySQL-specific options available for the Index construct.

    __init__(name, *columns, **kw)
        Construct an index object.
        Parameters:
            name: The name of the index
            *columns: Columns to include in the index. All columns must belong to the same table.
            unique: Defaults to False: create a unique index.
            **kw: Other keyword arguments may be interpreted by specific dialects.

    bind
        Return the connectable associated with this Index.

    create(bind=None)
        Issue a CREATE statement for this Index, using the given Connectable for connectivity. See also MetaData.create_all().

    drop(bind=None)
        Issue a DROP statement for this Index, using the given Connectable for connectivity.
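Controlling DDL Sequences

For the discussion below, assume a users table along these lines - one whose user_name column carries an inline CheckConstraint, with names matching the DDL output shown later in this section:

from sqlalchemy import MetaData, Table, Column, Integer, String, CheckConstraint

meta = MetaData()
users = Table('users', meta,
    Column('user_id', Integer, primary_key=True),
    Column('user_name', String(40), nullable=False),
    CheckConstraint('length(user_name) >= 8', name='cst_user_name_length'),
)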
The above table contains a column user_name which is subject to a CHECK constraint that validates that the length of the string is at least eight characters. When a create() is issued for this table, DDL for the CheckConstraint will also be issued inline within the table definition.

The CheckConstraint construct can also be constructed externally and associated with the Table afterwards:
constraint = CheckConstraint('length(user_name) >= 8', name="cst_user_name_length")
users.append_constraint(constraint)

So far, the effect is the same. However, if we create DDL elements corresponding to the creation and removal of this constraint, and associate them with the Table as events, these new events will take over the job of issuing DDL for the constraint. Additionally, the constraint will be added via ALTER:

from sqlalchemy import event

event.listen(
    users,
    "after_create",
    AddConstraint(constraint)
)
event.listen(
    users,
    "before_drop",
    DropConstraint(constraint)
)

users.create(engine)
CREATE TABLE users (
    user_id SERIAL NOT NULL,
    user_name VARCHAR(40) NOT NULL,
    PRIMARY KEY (user_id)
)

ALTER TABLE users ADD CONSTRAINT cst_user_name_length CHECK (length(user_name) >= 8)

users.drop(engine)
ALTER TABLE users DROP CONSTRAINT cst_user_name_length
DROP TABLE users

The real usefulness of the above becomes clearer once we illustrate the DDLElement.execute_if() method. This method returns a modified form of the DDL callable which will filter on criteria before responding to a received event. It accepts a parameter dialect, which is the string name of a dialect or a tuple of such, which will limit the execution of the item to just those dialects. It also accepts a callable_ parameter which may reference a Python callable which will be invoked upon event reception, returning True or False indicating if the event should proceed.

If our CheckConstraint was only supported by Postgresql and not other databases, we could limit its usage to just that dialect:

event.listen(
    users,
    'after_create',
    AddConstraint(constraint).execute_if(dialect='postgresql')
)
event.listen(
    users,
    'before_drop',
    DropConstraint(constraint).execute_if(dialect='postgresql')
)

Or to any set of dialects:

event.listen(
    users,
    "after_create",
    AddConstraint(constraint).execute_if(dialect=('postgresql', 'mysql'))
)
event.listen(
    users,
    "before_drop",
    DropConstraint(constraint).execute_if(dialect=('postgresql', 'mysql'))
)

When using a callable, the callable is passed the ddl element, the Table or MetaData object whose "create" or "drop" event is in progress, and the Connection object being used for the operation, as well as additional information as keyword arguments. The callable can perform checks, such as whether or not a given item already exists. Below we define should_create() and should_drop() callables that check for the presence of our named constraint:
def should_create(ddl, target, connection, **kw):
    row = connection.execute(
        "select conname from pg_constraint where conname='%s'" %
        ddl.element.name).scalar()
    return not bool(row)

def should_drop(ddl, target, connection, **kw):
    return not should_create(ddl, target, connection, **kw)

event.listen(
    users,
    "after_create",
    AddConstraint(constraint).execute_if(callable_=should_create)
)
event.listen(
    users,
    "before_drop",
    DropConstraint(constraint).execute_if(callable_=should_drop)
)

users.create(engine)
CREATE TABLE users (
    user_id SERIAL NOT NULL,
    user_name VARCHAR(40) NOT NULL,
    PRIMARY KEY (user_id)
)

select conname from pg_constraint where conname='cst_user_name_length'
ALTER TABLE users ADD CONSTRAINT cst_user_name_length CHECK (length(user_name) >= 8)

users.drop(engine)
select conname from pg_constraint where conname='cst_user_name_length'
ALTER TABLE users DROP CONSTRAINT cst_user_name_length
DROP TABLE users

Custom DDL

Custom DDL phrases are most easily achieved using the DDL construct. This construct works like all the other DDL elements except it accepts a string which is the text to be emitted:

event.listen(
    metadata,
    "after_create",
    DDL("ALTER TABLE users ADD CONSTRAINT "
        "cst_user_name_length "
        " CHECK (length(user_name) >= 8)")
)

A more comprehensive method of creating libraries of DDL constructs is to use custom compilation - see Custom SQL Constructs and Compilation Extension for details.

DDL Expression Constructs API

class sqlalchemy.schema.DDLElement
    Bases: sqlalchemy.sql.expression.Executable, sqlalchemy.sql.expression.ClauseElement
    Base class for DDL expression constructs.
    This class is the base for the general purpose DDL class, as well as the various create/drop clause constructs such as CreateTable, DropTable, AddConstraint, etc.
    DDLElement integrates closely with SQLAlchemy events, introduced in Events. An instance of one is itself an event receiving callable:

    event.listen(
        users,
        'after_create',
        AddConstraint(constraint).execute_if(dialect='postgresql')
    )

    See also:
        DDL
        DDLEvents
        Events
        Controlling DDL Sequences

    against(target)
        Return a copy of this DDL against a specific schema item.

    bind

    execute(bind=None, target=None)
        Execute this DDL immediately.
        Executes the DDL statement in isolation using the supplied Connectable or Connectable assigned to the .bind property, if not supplied. If the DDL has a conditional "on" criteria, it will be invoked with None as the event.
        Parameters:
            bind: Optional, an Engine or Connection. If not supplied, a valid Connectable must be present in the .bind property.
            target: Optional, defaults to None. The target SchemaItem for the execute call. Will be passed to the "on" callable if any, and may also provide string expansion data for the statement. See execute_at for more information.
    execute_at(event_name, target)
        Link execution of this DDL to the DDL lifecycle of a SchemaItem.
        Deprecated since version 0.7: See DDLEvents, as well as DDLElement.execute_if().
        Links this DDLElement to a Table or MetaData instance, executing it when that schema item is created or dropped. The DDL statement will be executed using the same Connection and transactional context as the Table create/drop itself. The .bind property of this statement is ignored.
        Parameters:
            event: One of the events defined in the schema item's .ddl_events; e.g. 'before-create', 'after-create', 'before-drop' or 'after-drop'
            target: The Table or MetaData instance for which this DDLElement will be associated with.
        A DDLElement instance can be linked to any number of schema items.
        execute_at builds on the append_ddl_listener interface of MetaData and Table objects.
        Caveat: Creating or dropping a Table in isolation will also trigger any DDL set to execute_at that Table's MetaData. This may change in a future release.

    execute_if(dialect=None, callable_=None, state=None)
        Return a callable that will execute this DDLElement conditionally.
        Used to provide a wrapper for event listening:

        event.listen(
            metadata,
            'before_create',
            DDL("my_ddl").execute_if(dialect='postgresql')
        )

        Parameters:
            dialect: May be a string, tuple or a callable predicate. If a string, it will be compared to the name of the executing database dialect:

            DDL('something').execute_if(dialect='postgresql')

            If a tuple, specifies multiple dialect names:

            DDL('something').execute_if(dialect=('postgresql', 'mysql'))

            callable_: A callable, which will be invoked with four positional arguments as well as optional keyword arguments:
                ddl: This DDL element.
                target: The Table or MetaData object which is the target of this event. May be None if the DDL is executed explicitly.
                bind: The Connection being used for DDL execution
                tables: Optional keyword argument - a list of Table objects which are to be created/dropped within a MetaData.create_all() or drop_all() method call.
                state: Optional keyword argument - will be the state argument passed to this function.
                checkfirst: Keyword argument, will be True if the 'checkfirst' flag was set during the call to create(), create_all(), drop(), drop_all().
            If the callable returns a true value, the DDL statement will be executed.
            state: any value which will be passed to the callable_ as the state keyword argument.
        See also:
            DDLEvents
            Events

class sqlalchemy.schema.DDL(statement, on=None, context=None, bind=None)
    Bases: sqlalchemy.schema.DDLElement
    A literal DDL statement.
    Specifies literal SQL DDL to be executed by the database. DDL objects function as DDL event listeners, and can be subscribed to those events listed in DDLEvents, using either Table or MetaData objects as targets. Basic templating support allows a single DDL instance to handle repetitive tasks for multiple tables.
    Examples:

    from sqlalchemy import event, DDL

    tbl = Table('users', metadata, Column('uid', Integer))
    event.listen(tbl, 'before_create', DDL('DROP TRIGGER users_trigger'))

    spow = DDL('ALTER TABLE %(table)s SET secretpowers TRUE')
    event.listen(tbl, 'after_create', spow.execute_if(dialect='somedb'))

    drop_spow = DDL('ALTER TABLE users SET secretpowers FALSE')
    connection.execute(drop_spow)

    When operating on Table events, the following statement string substitutions are available:

    %(table)s - the Table name, with any required quoting applied
    %(schema)s - the schema name, with any required quoting applied
    %(fullname)s - the Table name including schema, quoted if needed

    The DDL's "context", if any, will be combined with the standard substitutions noted above. Keys present in the context will override the standard substitutions.

    __init__(statement, on=None, context=None, bind=None)
        Create a DDL statement.
        Parameters:
            statement: A string or unicode string to be executed. Statements will be processed with Python's string formatting operator. See the context argument and the execute_at method.
                A literal '%' in a statement must be escaped as '%%'.
                SQL bind parameters are not available in DDL statements.
            on: Deprecated. See DDLElement.execute_if().
                Optional filtering criteria. May be a string, tuple or a callable predicate. If a string, it will be compared to the name of the executing database dialect:

                DDL('something', on='postgresql')
                If a tuple, specifies multiple dialect names:

                DDL('something', on=('postgresql', 'mysql'))

                If a callable, it will be invoked with four positional arguments as well as optional keyword arguments:
                    ddl: This DDL element.
                    event: The name of the event that has triggered this DDL, such as 'after-create'. Will be None if the DDL is executed explicitly.
                    target: The Table or MetaData object which is the target of this event. May be None if the DDL is executed explicitly.
                    connection: The Connection being used for DDL execution
                    tables: Optional keyword argument - a list of Table objects which are to be created/dropped within a MetaData.create_all() or drop_all() method call.
                If the callable returns a true value, the DDL statement will be executed.
            context: Optional dictionary, defaults to None. These values will be available for use in string substitutions on the DDL statement.
            bind: Optional. A Connectable, used by default when execute() is invoked without a bind argument.
        See also:
            DDLEvents
            sqlalchemy.event

class sqlalchemy.schema.CreateTable(element, on=None, bind=None)
    Bases: sqlalchemy.schema._CreateDropBase
    Represent a CREATE TABLE statement.

class sqlalchemy.schema.DropTable(element, on=None, bind=None)
    Bases: sqlalchemy.schema._CreateDropBase
    Represent a DROP TABLE statement.

class sqlalchemy.schema.CreateSequence(element, on=None, bind=None)
    Bases: sqlalchemy.schema._CreateDropBase
    Represent a CREATE SEQUENCE statement.

class sqlalchemy.schema.DropSequence(element, on=None, bind=None)
    Bases: sqlalchemy.schema._CreateDropBase
    Represent a DROP SEQUENCE statement.

class sqlalchemy.schema.CreateIndex(element, on=None, bind=None)
    Bases: sqlalchemy.schema._CreateDropBase
    Represent a CREATE INDEX statement.

class sqlalchemy.schema.DropIndex(element, on=None, bind=None)
    Bases: sqlalchemy.schema._CreateDropBase
    Represent a DROP INDEX statement.
class sqlalchemy.schema.AddConstraint(element, *args, **kw)
    Bases: sqlalchemy.schema._CreateDropBase
    Represent an ALTER TABLE ADD CONSTRAINT statement.

class sqlalchemy.schema.DropConstraint(element, cascade=False, **kw)
    Bases: sqlalchemy.schema._CreateDropBase
    Represent an ALTER TABLE DROP CONSTRAINT statement.
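Each of these constructs is an executable DDL element. As a sketch (reusing the users table and engine from the examples earlier in this section), they can be emitted directly rather than via events:

from sqlalchemy.schema import CreateTable, DropTable

# emit CREATE TABLE for the users table explicitly,
# outside of a metadata.create_all() run
engine.execute(CreateTable(users))

# and the corresponding DROP TABLE
engine.execute(DropTable(users))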
class sqlalchemy.types.Date(*args, **kwargs)
    Bases: sqlalchemy.types._DateAffinity, sqlalchemy.types.TypeEngine
    A type for datetime.date() objects.

class sqlalchemy.types.DateTime(timezone=False)
    Bases: sqlalchemy.types._DateAffinity, sqlalchemy.types.TypeEngine
    A type for datetime.datetime() objects.
    Date and time types return objects from the Python datetime module. Most DBAPIs have built in support for the datetime module, with the noted exception of SQLite. In the case of SQLite, date and time types are stored as strings which are then converted back to datetime objects when rows are returned.

class sqlalchemy.types.Enum(*enums, **kw)
    Bases: sqlalchemy.types.String, sqlalchemy.types.SchemaType
    Generic Enum Type.
    The Enum type provides a set of possible string values which the column is constrained towards.
    By default, uses the backend's native ENUM type if available, else uses VARCHAR + a CHECK constraint.

    __init__(*enums, **kw)
        Construct an enum.
        Keyword arguments which don't apply to a specific backend are ignored by that backend.
        Parameters:
            *enums: string or unicode enumeration labels. If unicode labels are present, the convert_unicode flag is auto-enabled.
            convert_unicode: Enable unicode-aware bind parameter and result-set processing for this Enum's data. This is set automatically based on the presence of unicode label strings.
            metadata: Associate this type directly with a MetaData object. For types that exist on the target database as an independent schema construct (Postgresql), this type will be created and dropped within create_all() and drop_all() operations. If the type is not associated with any MetaData object, it will associate itself with each Table in which it is used, and will be created when any of those individual tables are created, after a check is performed for its existence. The type is only dropped when drop_all() is called for that Table object's metadata, however.
            name: The name of this type. This is required for Postgresql and any future supported database which requires an explicitly named type, or an explicitly named constraint in order to generate the type and/or a table that uses it.
            native_enum: Use the database's native ENUM type when available. Defaults to True. When False, uses VARCHAR + check constraint for all backends.
            schema: Schema name of this type. For types that exist on the target database as an independent schema construct (Postgresql), this parameter specifies the named schema in which the type is present.
            quote: Force quoting to be on or off on the type's name. If left as the default of None, the usual schema-level case sensitive/reserved name rules are used to determine if this type's name should be quoted.

class sqlalchemy.types.Float(precision=None, asdecimal=False, **kwargs)
    Bases: sqlalchemy.types.Numeric
    A type for float numbers.
    Returns Python float objects by default, applying conversion as needed.

    __init__(precision=None, asdecimal=False, **kwargs)
        Construct a Float.
        Parameters:
            precision: the numeric precision for use in DDL CREATE TABLE.
            asdecimal: the same flag as that of Numeric, but defaults to False. Note that setting this flag to True results in floating point conversion.
            **kwargs: deprecated. Additional arguments here are ignored by the default Float type. For database specific floats that support additional arguments, see that dialect's documentation for details, such as sqlalchemy.dialects.mysql.FLOAT.

class sqlalchemy.types.Integer(*args, **kwargs)
    Bases: sqlalchemy.types._DateAffinity, sqlalchemy.types.TypeEngine
    A type for int integers.

class sqlalchemy.types.Interval(native=True, second_precision=None, day_precision=None)
    Bases: sqlalchemy.types._DateAffinity, sqlalchemy.types.TypeDecorator
    A type for datetime.timedelta() objects.
    The Interval type deals with datetime.timedelta objects. In PostgreSQL, the native INTERVAL type is used; for others, the value is stored as a date which is relative to the "epoch" (Jan. 1, 1970).
    Note that the Interval type does not currently provide date arithmetic operations on platforms which do not support interval types natively. Such operations usually require transformation of both sides of the expression (such as, conversion of both sides into integer epoch values first) which currently is a manual procedure (such as via func).

    __init__(native=True, second_precision=None, day_precision=None)
        Construct an Interval object.
        Parameters:
            native: when True, use the actual INTERVAL type provided by the database, if supported (currently Postgresql, Oracle). Otherwise, represent the interval data as an epoch value regardless.
            second_precision: For native interval types which support a fractional seconds precision parameter, i.e. Oracle and Postgresql
            day_precision: for native interval types which support a day precision parameter, i.e. Oracle.

    impl
        alias of DateTime

class sqlalchemy.types.LargeBinary(length=None)
    Bases: sqlalchemy.types._Binary
    A type for large binary byte data.
    The Binary type generates BLOB or BYTEA when tables are created, and also converts incoming values using the Binary callable provided by each DB-API.

    __init__(length=None)
        Construct a LargeBinary type.
        Parameters:
            length: optional, a length for the column for use in DDL statements, for those BLOB types that accept a length (i.e. MySQL). It does not produce a small BINARY/VARBINARY type - use the BINARY/VARBINARY types specifically for those. May be safely omitted if no CREATE TABLE will be issued. Certain databases may require a length for use in DDL, and will raise an exception when the CREATE TABLE DDL is issued.

class sqlalchemy.types.Numeric(precision=None, scale=None, asdecimal=True)
    Bases: sqlalchemy.types._DateAffinity, sqlalchemy.types.TypeEngine
    A type for fixed precision numbers.
    Typically generates DECIMAL or NUMERIC. Returns decimal.Decimal objects by default, applying conversion as needed.

    Note: The cdecimal library is a high performing alternative to Python's built-in decimal.Decimal type, which performs very poorly in high volume situations. SQLAlchemy 0.7 is tested against cdecimal and supports it fully. The type is not necessarily supported by DBAPI implementations however, most of which contain an import for plain decimal in their source code, even though some such as psycopg2 provide hooks for alternate adapters. SQLAlchemy imports decimal globally as well. While the alternate Decimal class can be patched into SQLA's decimal module, overall the most straightforward and foolproof way to use cdecimal given current DBAPI and Python support is to patch it directly into sys.modules before anything else is imported:

    import sys
    import cdecimal
    sys.modules["decimal"] = cdecimal

    While the global patch is a little ugly, it's particularly important to use just one decimal library at a time since Python Decimal and cdecimal Decimal objects are not currently compatible with each other:

    >>> import cdecimal
    >>> import decimal
    >>> decimal.Decimal("10") == cdecimal.Decimal("10")
    False

    SQLAlchemy will provide more natural support of cdecimal if and when it becomes a standard part of Python installations and is supported by all DBAPIs.

    __init__(precision=None, scale=None, asdecimal=True)
        Construct a Numeric.
        Parameters:
            precision: the numeric precision for use in DDL CREATE TABLE.
            scale: the numeric scale for use in DDL CREATE TABLE.
            asdecimal: default True. Return whether or not values should be sent as Python Decimal objects, or as floats. Different DBAPIs send one or the other based on datatypes - the Numeric type will ensure that return values are one or the other across DBAPIs consistently.
        When using the Numeric type, care should be taken to ensure that the asdecimal setting is appropriate for the DBAPI in use - when Numeric applies a conversion from Decimal->float or float->Decimal, this conversion incurs an additional performance overhead for all result columns received.
        DBAPIs that return Decimal natively (e.g. psycopg2) will have better accuracy and higher performance with a setting of True, as the native translation to Decimal reduces the amount of floating-point issues at play, and the Numeric type itself doesn't need to apply any further conversions. However, another DBAPI
        which returns floats natively will incur an additional conversion overhead, and is still subject to floating point data loss - in which case asdecimal=False will at least remove the extra conversion overhead.

class sqlalchemy.types.PickleType(protocol=2, pickler=None, mutable=False, comparator=None)
    Bases: sqlalchemy.types.MutableType, sqlalchemy.types.TypeDecorator
    Holds Python objects, which are serialized using pickle.
    PickleType builds upon the Binary type to apply Python's pickle.dumps() to incoming objects, and pickle.loads() on the way out, allowing any pickleable Python object to be stored as a serialized binary field.

    __init__(protocol=2, pickler=None, mutable=False, comparator=None)
        Construct a PickleType.
        Parameters:
            protocol: defaults to pickle.HIGHEST_PROTOCOL.
            pickler: defaults to cPickle.pickle or pickle.pickle if cPickle is not available. May be any object with pickle-compatible dumps and loads methods.
            mutable: defaults to False; implements AbstractType.is_mutable(). When True, incoming objects will be compared against copies of themselves using the Python "equals" operator, unless the comparator argument is present. See MutableType for details on "mutable" type behavior. (default changed from True in 0.7.0).
                Note: This functionality is now superseded by the sqlalchemy.ext.mutable extension described in Mutation Tracking.
            comparator: a 2-arg callable predicate used to compare values of this type. If left as None, the Python "equals" operator is used to compare values.

    impl
        alias of LargeBinary

    is_mutable()
        Return True if the target Python type is mutable.
        When this method is overridden, copy_value() should also be supplied. The MutableType mixin is recommended as a helper.

class sqlalchemy.types.SchemaType(**kw)
    Bases: sqlalchemy.events.SchemaEventTarget
    Mark a type as possibly requiring schema-level DDL for usage.
    Supports types that must be explicitly created/dropped (i.e. PG ENUM type) as well as types that are complemented by table or schema level constraints, triggers, and other rules.
    SchemaType classes can also be targets for the DDLEvents.before_parent_attach() and DDLEvents.after_parent_attach() events, where the events fire off surrounding the association of the type object with a parent Column.

    bind

    create(bind=None, checkfirst=False)
        Issue CREATE ddl for this type, if applicable.

    drop(bind=None, checkfirst=False)
        Issue DROP ddl for this type, if applicable.
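As a sketch of these hooks, the Enum type described earlier is a SchemaType; on PostgreSQL, where ENUM is an independent schema construct, its DDL can be emitted explicitly (the type name and connection URL here are illustrative, and on other backends these calls are no-ops):

from sqlalchemy import create_engine, Enum, MetaData

meta = MetaData()
engine = create_engine('postgresql://scott:tiger@localhost/test')

status = Enum('draft', 'published', name='status_enum', metadata=meta)

status.create(engine, checkfirst=True)  # CREATE TYPE status_enum ... on PG
status.drop(engine, checkfirst=True)    # DROP TYPE status_enum on PG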
class sqlalchemy.types.SmallInteger(*args, **kwargs)
    Bases: sqlalchemy.types.Integer
    A type for smaller int integers.
    Typically generates a SMALLINT in DDL, and otherwise acts like a normal Integer on the Python side.

class sqlalchemy.types.String(length=None, convert_unicode=False, assert_unicode=None, unicode_error=None, _warn_on_bytestring=False)
    Bases: sqlalchemy.types.Concatenable, sqlalchemy.types.TypeEngine
    The base for all string and character types.
    In SQL, corresponds to VARCHAR. Can also take Python unicode objects and encode to the database's encoding in bind params (and the reverse for result sets.)
    The length field is usually required when the String type is used within a CREATE TABLE statement, as VARCHAR requires a length on most databases.

    __init__(length=None, convert_unicode=False, assert_unicode=None, unicode_error=None, _warn_on_bytestring=False)
        Create a string-holding type.
        Parameters:
            length: optional, a length for the column for use in DDL statements. May be safely omitted if no CREATE TABLE will be issued. Certain databases may require a length for use in DDL, and will raise an exception when the CREATE TABLE DDL is issued. Whether the value is interpreted as bytes or characters is database specific.
            convert_unicode: defaults to False. If True, the type will do what is necessary in order to accept Python Unicode objects as bind parameters, and to return Python Unicode objects in result rows. This may require SQLAlchemy to explicitly coerce incoming Python unicodes into an encoding, and from an encoding back to Unicode, or it may not require any interaction from SQLAlchemy at all, depending on the DBAPI in use.
                When SQLAlchemy performs the encoding/decoding, the encoding used is configured via encoding, which defaults to utf-8.
                The convert_unicode behavior can also be turned on for all String types by setting sqlalchemy.engine.base.Dialect.convert_unicode on create_engine().
                To instruct SQLAlchemy to perform Unicode encoding/decoding even on a platform that already handles Unicode natively, set convert_unicode='force'. This will incur significant performance overhead when fetching unicode result columns.
            assert_unicode: Deprecated. A warning is raised in all cases when a non-Unicode object is passed when SQLAlchemy would coerce into an encoding (note: but not when the DBAPI handles unicode objects natively). To suppress or raise this warning to an error, use the Python warnings filter documented at: http://docs.python.org/library/warnings.html
            unicode_error: Optional, a method to use to handle Unicode conversion errors. Behaves like the errors keyword argument to the standard library's string.decode() functions. This flag requires that convert_unicode is set to 'force' - otherwise, SQLAlchemy is not guaranteed to handle the task of unicode conversion. Note that this flag adds significant performance overhead to row-fetching operations for backends that already return unicode objects natively (which most DBAPIs do). This flag should only be used as an absolute last resort for reading
                strings from a column with varied or corrupted encodings, which only applies to databases that accept invalid encodings in the first place (i.e. MySQL, not PG, Sqlite, etc.)

class sqlalchemy.types.Text(length=None, convert_unicode=False, assert_unicode=None, unicode_error=None, _warn_on_bytestring=False)
    Bases: sqlalchemy.types.String
    A variably sized string type.
    In SQL, usually corresponds to CLOB or TEXT. Can also take Python unicode objects and encode to the database's encoding in bind params (and the reverse for result sets.)

class sqlalchemy.types.Time(timezone=False)
    Bases: sqlalchemy.types._DateAffinity, sqlalchemy.types.TypeEngine
    A type for datetime.time() objects.

class sqlalchemy.types.Unicode(length=None, **kwargs)
    Bases: sqlalchemy.types.String
    A variable length Unicode string.
    The Unicode type is a String which converts Python unicode objects (i.e., strings that are defined as u'somevalue') into encoded bytestrings when passing the value to the database driver, and similarly decodes values from the database back into Python unicode objects.
    It's roughly equivalent to using a String object with convert_unicode=True, however the type has other significances in that it implies the usage of a unicode-capable type being used on the backend, such as NVARCHAR. This may affect what type is emitted when issuing CREATE TABLE and also may affect some DBAPI-specific details, such as type information passed along to setinputsizes().
    When using the Unicode type, it is only appropriate to pass Python unicode objects, and not plain str. If a bytestring (str) is passed, a runtime warning is issued. If you notice your application raising these warnings but you're not sure where, the Python warnings filter can be used to turn these warnings into exceptions which will illustrate a stack trace:

    import warnings
    warnings.simplefilter('error')

    Bytestrings sent to and received from the database are encoded using the dialect's encoding, which defaults to utf-8.

    __init__(length=None, **kwargs)
        Create a Unicode-converting String type.
        Parameters:
            length: optional, a length for the column for use in DDL statements. May be safely omitted if no CREATE TABLE will be issued. Certain databases may require a length for use in DDL, and will raise an exception when the CREATE TABLE DDL is issued. Whether the value is interpreted as bytes or characters is database specific.
            **kwargs: passed through to the underlying String type.

class sqlalchemy.types.UnicodeText(length=None, **kwargs)
    Bases: sqlalchemy.types.Text
    An unbounded-length Unicode string.
    See Unicode for details on the unicode behavior of this object.
    Like Unicode, usage of the UnicodeText type implies a unicode-capable type being used on the backend, such as NCLOB.

    __init__(length=None, **kwargs)
        Create a Unicode-converting Text type.
        Parameters:
            length: optional, a length for the column for use in DDL statements. May be safely omitted if no CREATE TABLE will be issued. Certain databases may require a length for use in DDL, and will raise an exception when the CREATE TABLE DDL is issued. Whether the value is interpreted as bytes or characters is database specific.
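A brief sketch of the Unicode types above in a table definition (the table itself is illustrative):

from sqlalchemy import MetaData, Table, Column, Integer, Unicode, UnicodeText

meta = MetaData()
articles = Table('articles', meta,
    Column('id', Integer, primary_key=True),
    # implies NVARCHAR / NCLOB style types on unicode-aware backends;
    # pass u'' strings in, receive unicode objects back out
    Column('title', Unicode(200)),
    Column('body', UnicodeText),
)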
class sqlalchemy.types.FLOAT(precision=None, asdecimal=False, **kwargs)
    Bases: sqlalchemy.types.Float
    The SQL FLOAT type.

sqlalchemy.types.INT
    alias of INTEGER

class sqlalchemy.types.INTEGER(*args, **kwargs)
    Bases: sqlalchemy.types.Integer
    The SQL INT or INTEGER type.

class sqlalchemy.types.NCHAR(length=None, **kwargs)
    Bases: sqlalchemy.types.Unicode
    The SQL NCHAR type.

class sqlalchemy.types.NVARCHAR(length=None, **kwargs)
    Bases: sqlalchemy.types.Unicode
    The SQL NVARCHAR type.

class sqlalchemy.types.NUMERIC(precision=None, scale=None, asdecimal=True)
    Bases: sqlalchemy.types.Numeric
    The SQL NUMERIC type.

class sqlalchemy.types.REAL(precision=None, asdecimal=False, **kwargs)
    Bases: sqlalchemy.types.Float
    The SQL REAL type.

class sqlalchemy.types.SMALLINT(*args, **kwargs)
    Bases: sqlalchemy.types.SmallInteger
    The SQL SMALLINT type.

class sqlalchemy.types.TEXT(length=None, convert_unicode=False, assert_unicode=None, unicode_error=None, _warn_on_bytestring=False)
    Bases: sqlalchemy.types.Text
    The SQL TEXT type.

class sqlalchemy.types.TIME(timezone=False)
    Bases: sqlalchemy.types.Time
    The SQL TIME type.

class sqlalchemy.types.TIMESTAMP(timezone=False)
    Bases: sqlalchemy.types.DateTime
    The SQL TIMESTAMP type.

class sqlalchemy.types.VARBINARY(length=None)
    Bases: sqlalchemy.types._Binary
    The SQL VARBINARY type.

class sqlalchemy.types.VARCHAR(length=None, convert_unicode=False, assert_unicode=None, unicode_error=None, _warn_on_bytestring=False)
    Bases: sqlalchemy.types.String
    The SQL VARCHAR type.
For example, MySQL has a BIGINT type and PostgreSQL has an INET type. To use these, import them from the module explicitly:

from sqlalchemy.dialects import mysql

table = Table('foo', meta,
    Column('id', mysql.BIGINT),
    Column('enumerates', mysql.ENUM('a', 'b', 'c'))
)

Or some PostgreSQL types:

from sqlalchemy.dialects import postgresql

table = Table('foo', meta,
    Column('ipaddress', postgresql.INET),
    Column('elements', postgresql.ARRAY(str))
)

Each dialect provides the full set of typenames supported by that backend within its __all__ collection, so that a simple import * or similar will import all supported types as implemented for that backend:

from sqlalchemy.dialects.postgresql import *

t = Table('mytable', metadata,
    Column('id', INTEGER, primary_key=True),
    Column('name', VARCHAR(300)),
    Column('inetaddr', INET)
)

Where above, the INTEGER and VARCHAR types are ultimately from sqlalchemy.types, and INET is specific to the Postgresql dialect.

Some dialect level types have the same name as the SQL standard type, but also provide additional arguments. For example, MySQL implements the full range of character and string types including additional arguments such as collation and charset:

from sqlalchemy.dialects.mysql import VARCHAR, TEXT

table = Table('foo', meta,
    Column('col1', VARCHAR(200, collation='binary')),
    Column('col2', TEXT(charset='latin1'))
)
BINARY for all platforms except for one, in which it wants BLOB to be rendered. Usage of an existing generic type, in this case LargeBinary, is preferred for most use cases. But to control types more accurately, a compilation directive that is per-dialect can be associated with any type:

from sqlalchemy.ext.compiler import compiles
from sqlalchemy.types import BINARY

@compiles(BINARY, "sqlite")
def compile_binary_sqlite(type_, compiler, **kw):
    return "BLOB"

The above code allows the usage of types.BINARY, which will produce the string BINARY against all backends except SQLite, in which case it will produce BLOB.

See the section Changing Compilation of Types, a subsection of Custom SQL Constructs and Compilation Extension, for additional examples.

Augmenting Existing Types

The TypeDecorator allows the creation of custom types which add bind-parameter and result-processing behavior to an existing type object. It is used when additional in-Python marshalling of data to and from the database is required.

class sqlalchemy.types.TypeDecorator(*args, **kwargs)
    Bases: sqlalchemy.types.TypeEngine
    Allows the creation of types which add additional functionality to an existing type.
    This method is preferred to direct subclassing of SQLAlchemy's built-in types as it ensures that all required functionality of the underlying type is kept in place.
    Typical usage:

    import sqlalchemy.types as types

    class MyType(types.TypeDecorator):
        """Prefixes Unicode values with "PREFIX:" on the way in and
        strips it off on the way out."""

        impl = types.Unicode

        def process_bind_param(self, value, dialect):
            return "PREFIX:" + value

        def process_result_value(self, value, dialect):
            return value[7:]

        def copy(self):
            return MyType(self.impl.length)

    The class-level impl variable is required, and can reference any TypeEngine class. Alternatively, the load_dialect_impl() method can be used to provide different type classes based on the dialect given; in this case, the impl variable can reference TypeEngine as a placeholder.
    Types that receive a Python type that isn't similar to the ultimate type used may want to define the TypeDecorator.coerce_compared_value() method. This is used to give the expression system a hint when coercing Python objects into bind parameters within expressions. Consider this expression:

    mytable.c.somecol + datetime.date(2009, 5, 15)
    Above, if somecol is an Integer variant, it makes sense that we're doing date arithmetic, where above is usually interpreted by databases as adding a number of days to the given date. The expression system does the right thing by not attempting to coerce the date() value into an integer-oriented bind parameter.
    However, in the case of TypeDecorator, we are usually changing an incoming Python type to something new - TypeDecorator by default will coerce the non-typed side to be the same type as itself. For example, below we define an "epoch" type that stores a date value as an integer:

    import datetime
    from datetime import timedelta

    class MyEpochType(types.TypeDecorator):
        impl = types.Integer

        epoch = datetime.date(1970, 1, 1)

        def process_bind_param(self, value, dialect):
            return (value - self.epoch).days

        def process_result_value(self, value, dialect):
            return self.epoch + timedelta(days=value)

    Our expression of somecol + date with the above type will coerce the date on the right side to also be treated as MyEpochType.
    This behavior can be overridden via the coerce_compared_value() method, which returns a type that should be used for the value of the expression. Below we set it such that an integer value will be treated as an Integer, and any other value is assumed to be a date and will be treated as a MyEpochType:

    def coerce_compared_value(self, op, value):
        if isinstance(value, int):
            return Integer()
        else:
            return self

    __init__(*args, **kwargs)
        Construct a TypeDecorator.
        Arguments sent here are passed to the constructor of the class assigned to the impl class level attribute, where the self.impl attribute is assigned an instance of the implementation type. If impl at the class level is already an instance, then it's assigned to self.impl as is.
        Subclasses can override this to customize the generation of self.impl.

    adapt(cls, **kw)
        Produce an adapted form of this type, given an impl class to work with.
        This method is used internally to associate generic types with implementation types that are specific to a particular dialect.

    bind_processor(dialect)
        Provide a bound value processing function for the given Dialect.
        This is the method that fulfills the TypeEngine contract for bound value conversion. TypeDecorator will wrap a user-defined implementation of process_bind_param() here.
        User-defined code can override this method directly, though it's likely best to use process_bind_param() so that the processing provided by self.impl is maintained.

    coerce_compared_value(op, value)
        Suggest a type for a coerced Python value in an expression.
        By default, returns self. This method is called by the expression system when an object using this type is on the left or right side of an expression against a plain Python object which does not yet have a
        SQLAlchemy type assigned:

        expr = table.c.somecolumn + 35

        Where above, if somecolumn uses this type, this method will be called with the value operator.add and 35. The return value is whatever SQLAlchemy type should be used for 35 for this particular operation.

    compare_values(x, y)
        Given two values, compare them for equality.
        By default this calls upon TypeEngine.compare_values() of the underlying impl, which in turn usually uses the Python equals operator ==.
        This function is used by the ORM to compare an original-loaded value with an intercepted "changed" value, to determine if a net change has occurred.

    compile(dialect=None)
        Produce a string-compiled form of this TypeEngine.
        When called with no arguments, uses a "default" dialect to produce a string result.
        Parameters:
            dialect: a Dialect instance.

    copy()
        Produce a copy of this TypeDecorator instance.
        This is a shallow copy and is provided to fulfill part of the TypeEngine contract. It usually does not need to be overridden unless the user-defined TypeDecorator has local state that should be deep-copied.

    copy_value(value)
        Given a value, produce a copy of it.
        By default this calls upon TypeEngine.copy_value() of the underlying impl.
        copy_value() will return the object itself, assuming "mutability" is not enabled. Only the MutableType mixin provides a copy function that actually produces a new object. The copying function is used by the ORM when "mutable" types are used, to memoize the original version of an object as loaded from the database, which is then compared to the possibly mutated version to check for changes.
        Modern implementations should use the sqlalchemy.ext.mutable extension described in Mutation Tracking for intercepting in-place changes to values.

    dialect_impl(dialect)
        Return a dialect-specific implementation for this TypeEngine.

    get_dbapi_type(dbapi)
        Return the DBAPI type object represented by this TypeDecorator.
        By default this calls upon TypeEngine.get_dbapi_type() of the underlying impl.

    is_mutable()
        Return True if the target Python type is mutable.
        This allows systems like the ORM to know if a column value can be considered "not changed" by comparing the identity of objects alone. Values such as dicts, lists which are serialized into strings are examples of "mutable" column structures.
        Note: This functionality is now superseded by the sqlalchemy.ext.mutable extension described in Mutation Tracking.

    load_dialect_impl(dialect)
        Return a TypeEngine object corresponding to a dialect.
        This is an end-user override hook that can be used to provide differing types depending on the given dialect. It is used by the TypeDecorator implementation of type_engine() to help determine what type should ultimately be returned for a given TypeDecorator.
        By default returns self.impl.

    process_bind_param(value, dialect)
        Receive a bound parameter value to be converted.
        Subclasses override this method to return the value that should be passed along to the underlying TypeEngine object, and from there to the DBAPI execute() method.
        Parameters:
            value: the value. Can be None.
            dialect: the Dialect in use.

    process_result_value(value, dialect)
        Receive a result-row column value to be converted.
        Subclasses override this method to return the value that should be passed back to the application, given a value that is already processed by the underlying TypeEngine object, originally from the DBAPI cursor method fetchone() or similar.
        Parameters:
            value: the value. Can be None.
            dialect: the Dialect in use.

    result_processor(dialect, coltype)
        Provide a result value processing function for the given Dialect.
        This is the method that fulfills the TypeEngine contract for result value conversion. TypeDecorator will wrap a user-defined implementation of process_result_value() here.
        User-defined code can override this method directly, though it's likely best to use process_result_value() so that the processing provided by self.impl is maintained.

    type_engine(dialect)
        Return a dialect-specific TypeEngine instance for this TypeDecorator.
        In most cases this returns a dialect-adapted form of the TypeEngine type represented by self.impl. Makes usage of dialect_impl() but also traverses into wrapped TypeDecorator instances. Behavior can be customized here by overriding load_dialect_impl().

    with_variant(type_, dialect_name)
        Produce a new type object that will utilize the given type when applied to the dialect of the given name.
        e.g.:

        from sqlalchemy.types import String
        from sqlalchemy.dialects import mysql

        s = String()
        s = s.with_variant(mysql.VARCHAR(collation='foo'), 'mysql')

        The construction of TypeEngine.with_variant() is always from the "fallback" type to that which is dialect specific. The returned type is an instance of Variant, which itself provides a with_variant() that can be called repeatedly.
        Parameters:
            type_: a TypeEngine that will be selected as a variant from the originating type, when a dialect of the given name is in use.
            dialect_name: base name of the dialect which uses this type (i.e. 'postgresql', 'mysql', etc.)
        New in 0.7.2.

TypeDecorator Recipes

A few key TypeDecorator recipes follow.
Rounding Numerics
Some database connectors like those of SQL Server choke if a Decimal is passed with too many decimal places. Here's a recipe that rounds them down:

from sqlalchemy.types import TypeDecorator, Numeric
from decimal import Decimal

class SafeNumeric(TypeDecorator):
    """Adds quantization to Numeric."""

    impl = Numeric

    def __init__(self, *arg, **kw):
        TypeDecorator.__init__(self, *arg, **kw)
        self.quantize_int = -(self.impl.precision - self.impl.scale)
        self.quantize = Decimal(10) ** self.quantize_int

    def process_bind_param(self, value, dialect):
        if isinstance(value, Decimal) and \
                value.as_tuple()[2] < self.quantize_int:
            value = value.quantize(self.quantize)
        return value
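Another recipe in the same spirit (sketched here; the class name is our own choosing) coerces stray bytestrings to unicode before they reach a Unicode-bound parameter:

from sqlalchemy.types import TypeDecorator, Unicode

class CoerceUTF8(TypeDecorator):
    """Safely coerce Python bytestrings to Unicode
    before passing off to the database."""

    impl = Unicode

    def process_bind_param(self, value, dialect):
        # decode plain bytestrings; unicode values pass through untouched
        if isinstance(value, str):
            value = value.decode('utf-8')
        return value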
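class sqlalchemy.types.UserDefinedType(*args, **kwargs)
    Bases: sqlalchemy.types.TypeEngine
    Base for user defined types.
    This should be the base of new types that render a type name of their own; for most cases, TypeDecorator is the more appropriate choice. A sketch of the conventional recipe follows - the class header here is an assumption, reconstructed to match the methods shown below:

    import sqlalchemy.types as types

    class MyType(types.UserDefinedType):
        def __init__(self, precision=8):
            self.precision = precision

        def get_col_spec(self):
            # the type name rendered into CREATE TABLE
            return "MYTYPE(%s)" % self.precision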
        def bind_processor(self, dialect):
            def process(value):
                return value
            return process

        def result_processor(self, dialect, coltype):
            def process(value):
                return value
            return process

    Once the type is made, it's immediately usable:

    table = Table('foo', meta,
        Column('id', Integer, primary_key=True),
        Column('data', MyType(16))
    )

    __init__(*args, **kwargs)
        Support implementations that were passing arguments

    adapt(cls, **kw)
        Produce an adapted form of this type, given an impl class to work with.
        This method is used internally to associate generic types with implementation types that are specific to a particular dialect.

    adapt_operator(op)
        A hook which allows the given operator to be adapted to something new.
        See also UserDefinedType._adapt_expression(), an as-yet-semi-public method with greater capability in this regard.

    bind_processor(dialect)
        Return a conversion function for processing bind values.
        Returns a callable which will receive a bind parameter value as the sole positional argument and will return a value to send to the DB-API.
        If processing is not necessary, the method should return None.
        Parameters:
            dialect: Dialect instance in use.

    compare_values(x, y)
        Compare two values for equality.

    compile(dialect=None)
        Produce a string-compiled form of this TypeEngine.
        When called with no arguments, uses a "default" dialect to produce a string result.
        Parameters:
            dialect: a Dialect instance.

    copy_value(value)

    dialect_impl(dialect)
        Return a dialect-specific implementation for this TypeEngine.

    get_dbapi_type(dbapi)
        Return the corresponding type object from the underlying DB-API, if any.
        This can be useful for calling setinputsizes(), for example.

    is_mutable()
        Return True if the target Python type is mutable.
        This allows systems like the ORM to know if a column value can be considered "not changed" by comparing the identity of objects alone. Values such as dicts, lists which are serialized into strings are examples of "mutable" column structures.
        Note: This functionality is now superseded by the sqlalchemy.ext.mutable extension described in Mutation Tracking.
        When this method is overridden, copy_value() should also be supplied. The MutableType mixin is recommended as a helper.

    result_processor(dialect, coltype)
        Return a conversion function for processing result row values.
        Returns a callable which will receive a result row column value as the sole positional argument and will return a value to return to the user.
        If processing is not necessary, the method should return None.
        Parameters:
            dialect: Dialect instance in use.
            coltype: DBAPI coltype argument received in cursor.description.

    with_variant(type_, dialect_name)
        Produce a new type object that will utilize the given type when applied to the dialect of the given name.
        e.g.:

        from sqlalchemy.types import String
        from sqlalchemy.dialects import mysql

        s = String()
        s = s.with_variant(mysql.VARCHAR(collation='foo'), 'mysql')

        The construction of TypeEngine.with_variant() is always from the "fallback" type to that which is dialect specific. The returned type is an instance of Variant, which itself provides a with_variant() that can be called repeatedly.
        Parameters:
            type_: a TypeEngine that will be selected as a variant from the originating type, when a dialect of the given name is in use.
            dialect_name: base name of the dialect which uses this type (i.e. 'postgresql', 'mysql', etc.)
        New in 0.7.2.
class sqlalchemy.types.TypeEngine(*args, **kwargs)
    Bases: sqlalchemy.types.AbstractType
    Base for built-in types.

    __init__(*args, **kwargs)
        Support implementations that were passing arguments

    adapt(cls, **kw)
        Produce an adapted form of this type, given an impl class to work with.
        This method is used internally to associate generic types with implementation types that are specific to a particular dialect.

    bind_processor(dialect)
        Return a conversion function for processing bind values.
        Returns a callable which will receive a bind parameter value as the sole positional argument and will return a value to send to the DB-API.
        If processing is not necessary, the method should return None.
        Parameters:
            dialect: Dialect instance in use.

    compare_values(x, y)
        Compare two values for equality.

    compile(dialect=None)
        Produce a string-compiled form of this TypeEngine.
        When called with no arguments, uses a "default" dialect to produce a string result.
        Parameters:
            dialect: a Dialect instance.

    copy_value(value)

    dialect_impl(dialect)
        Return a dialect-specific implementation for this TypeEngine.

    get_dbapi_type(dbapi)
        Return the corresponding type object from the underlying DB-API, if any.
        This can be useful for calling setinputsizes(), for example.

    is_mutable()
        Return True if the target Python type is mutable.
        This allows systems like the ORM to know if a column value can be considered "not changed" by comparing the identity of objects alone. Values such as dicts, lists which are serialized into strings are examples of "mutable" column structures.
        Note: This functionality is now superseded by the sqlalchemy.ext.mutable extension described in Mutation Tracking.
        When this method is overridden, copy_value() should also be supplied. The MutableType mixin is recommended as a helper.

    result_processor(dialect, coltype)
        Return a conversion function for processing result row values.
        Returns a callable which will receive a result row column value as the sole positional argument and will return a value to return to the user.
        If processing is not necessary, the method should return None.
        Parameters:
            dialect: Dialect instance in use.
            coltype: DBAPI coltype argument received in cursor.description.

    with_variant(type_, dialect_name)
        Produce a new type object that will utilize the given type when applied to the dialect of the given name.
        e.g.:

        from sqlalchemy.types import String
        from sqlalchemy.dialects import mysql

        s = String()
        s = s.with_variant(mysql.VARCHAR(collation='foo'), 'mysql')

        The construction of TypeEngine.with_variant() is always from the "fallback" type to that which is dialect specific. The returned type is an instance of Variant, which itself provides a with_variant() that can be called repeatedly.
        Parameters:
            type_: a TypeEngine that will be selected as a variant from the originating type, when a dialect of the given name is in use.
            dialect_name: base name of the dialect which uses this type (i.e. 'postgresql', 'mysql', etc.)
        New in 0.7.2.

class sqlalchemy.types.MutableType
    Bases: object
    A mixin that marks a TypeEngine as representing a mutable Python object type. This functionality is used only by the ORM.
    Note: MutableType is superseded as of SQLAlchemy 0.7 by the sqlalchemy.ext.mutable extension described in Mutation Tracking. This extension provides an event driven approach to in-place mutation detection that does not incur the severe performance penalty of the MutableType approach.
    "mutable" means that changes can occur in place to a value of this type. Examples include Python lists, dictionaries, and sets, as well as user-defined objects. The primary need for identification of "mutable" types is by the ORM, which applies special rules to such values in order to guarantee that changes are detected. These rules may have a significant performance impact, described below.
    A MutableType usually allows a flag called mutable=False to enable/disable the "mutability" flag, represented on this class by is_mutable(). Examples include PickleType and ARRAY. Setting this flag to True enables mutability-specific behavior by the ORM.
    The copy_value() and compare_values() functions represent a copy and compare function for values of this type - implementing subclasses should override these appropriately.
    Warning: The usage of mutable types has significant performance implications when using the ORM. In order to detect changes, the ORM must create a copy of the value when it is first accessed, so that changes to the current value can be compared against the "clean" database-loaded value. Additionally, when the ORM checks to see if any data requires flushing, it must scan through all instances in the session which are known to have "mutable" attributes and compare the current value of each one to its "clean" value. So for example, if the Session contains 6000 objects (a fairly large amount) and autoflush is enabled, every individual execution of Query will require a full scan of that subset of the 6000 objects that have mutable attributes, possibly resulting in tens of thousands of additional method calls for every query.
    As of SQLAlchemy 0.7, the sqlalchemy.ext.mutable extension is provided which allows an event driven approach to in-place mutation detection. This approach should now be favored over the usage of MutableType with mutable=True. sqlalchemy.ext.mutable is described in Mutation Tracking.

    __init__
        x.__init__(...) initializes x; see x.__class__.__doc__ for signature

    compare_values(x, y)
        Compare x == y.

    copy_value(value)
        Unimplemented.

    is_mutable()
        Return True if the target Python type is mutable.
        For MutableType, this method is set to return True.

class sqlalchemy.types.Concatenable
    Bases: object
    A mixin that marks a type as supporting "concatenation", typically strings.

    __init__
        x.__init__(...) initializes x; see x.__class__.__doc__ for signature

class sqlalchemy.types.NullType(*args, **kwargs)
    Bases: sqlalchemy.types.TypeEngine
    An unknown type.
    NullTypes will stand in if Table reflection encounters a column data type unknown to SQLAlchemy. The resulting columns are nearly fully usable: the DB-API adapter will handle all translation to and from the database data type.
    NullType does not have sufficient information to participate in a CREATE TABLE statement and will raise an exception if encountered during a create() operation.

class sqlalchemy.types.Variant(base, mapping)
    Bases: sqlalchemy.types.TypeDecorator
    A wrapping type that selects among a variety of implementations based on dialect in use.
    The Variant type is typically constructed using the TypeEngine.with_variant() method.
    New in 0.7.2.

    with_variant(type_, dialect_name)
        Return a new Variant which adds the given type + dialect name to the mapping, in addition to the mapping present in this Variant.
        Parameters:
    type_ – a TypeEngine that will be selected as a variant from the originating type, when a dialect of the given name is in use.
    dialect_name – base name of the dialect which uses this type (i.e. 'postgresql', 'mysql', etc.)
New in 0.7.2.
__init__(base, mapping)
Construct a new Variant.
Parameters
    base – the base "fallback" type
    mapping – dictionary of string dialect names to TypeEngine instances.
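As the note under MutableType above indicates, the sqlalchemy.ext.mutable extension is the favored approach as of 0.7. A condensed sketch of the pattern documented in Mutation Tracking (the MutableDict name follows that example; the association with PickleType is one possible target):

    from sqlalchemy.ext.mutable import Mutable
    from sqlalchemy.types import PickleType

    class MutableDict(Mutable, dict):
        @classmethod
        def coerce(cls, key, value):
            "Convert plain dictionaries to MutableDict."
            if not isinstance(value, MutableDict):
                if isinstance(value, dict):
                    return MutableDict(value)
                return Mutable.coerce(key, value)
            return value

        def __setitem__(self, key, value):
            "Detect dictionary set events and emit change events."
            dict.__setitem__(self, key, value)
            self.changed()

        def __delitem__(self, key):
            "Detect dictionary del events and emit change events."
            dict.__delitem__(self, key)
            self.changed()

    # associate with all occurrences of PickleType, so that in-place
    # changes to pickled dictionaries are detected by the ORM
    MutableDict.associate_with(PickleType)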
3.8 Events
SQLAlchemy includes an event API which publishes a wide variety of hooks into the internals of both SQLAlchemy Core and ORM. The system is all new as of version 0.7 and supersedes the previous system of "extension", "proxy", and "listener" classes.
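As a quick orientation before the sections that follow, registration always goes through a single listen() function, given a target, a string event identifier, and a user-defined callable. A minimal sketch (the my_on_connect listener shown here is reused in the Targets examples below):

    from sqlalchemy import event
    from sqlalchemy.pool import Pool

    def my_on_connect(dbapi_con, connection_record):
        print "New DBAPI connection:", dbapi_con

    # "connect" is the event identifier; Pool is the target class
    event.listen(Pool, 'connect', my_on_connect)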
3.8.2 Targets
The listen() function is very flexible regarding targets. It generally accepts classes, instances of those classes, and related classes or objects from which the appropriate target can be derived. For example, the above mentioned "connect" event accepts Engine classes and objects as well as Pool classes and objects:

    from sqlalchemy.event import listen
    from sqlalchemy.pool import Pool, QueuePool
    from sqlalchemy import create_engine
    from sqlalchemy.engine import Engine
    import psycopg2
    def connect():
        return psycopg2.connect(user='ed', host='127.0.0.1', dbname='test')

    my_pool = QueuePool(connect)

    my_engine = create_engine('postgresql://ed@localhost/test')

    # associate listener with all instances of Pool
    listen(Pool, 'connect', my_on_connect)

    # associate listener with all instances of Pool
    # via the Engine class
    listen(Engine, 'connect', my_on_connect)

    # associate listener with my_pool
    listen(my_pool, 'connect', my_on_connect)

    # associate listener with my_engine.pool
    listen(my_engine, 'connect', my_on_connect)
3.8.3 Modifiers
Some listeners allow modifiers to be passed to listen(). These modifiers sometimes provide alternate calling signatures for listeners. For example, with ORM events, some event listeners can have a return value which modifies the subsequent handling. By default, no listener ever requires a return value, but by passing retval=True this value can be supported:

    import re

    def validate_phone(target, value, oldvalue, initiator):
        """Strip non-numeric characters from a phone number"""
        return re.sub(r'[^0-9]', '', value)

    # setup listener on UserContact.phone attribute, instructing
    # it to use the return value
    listen(UserContact.phone, 'set', validate_phone, retval=True)
    from sqlalchemy import event
    from sqlalchemy.schema import UniqueConstraint

    def unique_constraint_name(const, table):
        const.name = "uq_%s_%s" % (
            table.name,
            list(const.columns)[0].name
        )
    event.listen(
            UniqueConstraint,
            "after_parent_attach",
            unique_constraint_name)

sqlalchemy.event.listens_for(target, identifier, *args, **kw)
Decorate a function as a listener for the given target + identifier. e.g.:

    from sqlalchemy import event
    from sqlalchemy.schema import UniqueConstraint

    @event.listens_for(UniqueConstraint, "after_parent_attach")
    def unique_constraint_name(const, table):
        const.name = "uq_%s_%s" % (
            table.name,
            list(const.columns)[0].name
        )
    engine = create_engine("postgresql://scott:tiger@localhost/test")

    # will associate with engine.pool
    event.listen(engine, 'checkout', my_on_checkout)

checkin(dbapi_connection, connection_record)
Called when a connection returns to the pool. Note that the connection may be closed, and may be None if the connection has been invalidated. checkin will not be called for detached connections. (They do not return to the pool.)
Parameters
    dbapi_connection – A raw DB-API connection
    connection_record – The _ConnectionRecord that persistently manages the connection
checkout(dbapi_connection, connection_record, connection_proxy)
Called when a connection is retrieved from the Pool.
Parameters
    dbapi_connection – A raw DB-API connection
    connection_record – The _ConnectionRecord that persistently manages the connection
    connection_proxy – The _ConnectionFairy which manages the connection for the span of the current checkout.
If you raise an exc.DisconnectionError, the current connection will be disposed and a fresh connection retrieved. Processing of all checkout listeners will abort and restart using the new connection.
connect(dbapi_connection, connection_record)
Called once for each new DB-API connection or Pool's creator().
Parameters
    dbapi_connection – A newly connected raw DB-API connection (not a SQLAlchemy Connection wrapper).
    connection_record – The _ConnectionRecord that persistently manages the connection
first_connect(dbapi_connection, connection_record)
Called exactly once for the first DB-API connection.
Parameters
    dbapi_connection – A newly connected raw DB-API connection (not a SQLAlchemy Connection wrapper).
    connection_record – The _ConnectionRecord that persistently manages the connection
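The checkout hook is the customary place for "pessimistic" disconnect handling, using the DisconnectionError behavior described above. A hedged sketch (the ping_connection name and the SELECT 1 probe are illustrative choices, not part of the API):

    from sqlalchemy import create_engine, event, exc

    engine = create_engine("postgresql://scott:tiger@localhost/test")

    @event.listens_for(engine, 'checkout')
    def ping_connection(dbapi_connection, connection_record, connection_proxy):
        # probe the connection with a trivial statement; if the
        # database has gone away, tell the pool to discard this
        # connection and restart the checkout with a fresh one
        cursor = dbapi_connection.cursor()
        try:
            cursor.execute("SELECT 1")
        except Exception:
            raise exc.DisconnectionError()
        cursor.close()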
e.g.:

    import logging
    from sqlalchemy import event, create_engine

    log = logging.getLogger(__name__)

    def before_execute(conn, clauseelement, multiparams, params):
        log.info("Received statement: %s" % clauseelement)

    engine = create_engine('postgresql://scott:tiger@localhost/test')
    event.listen(engine, "before_execute", before_execute)

Some events allow modifiers to the listen() function.
Parameters
    retval=False – Applies to the before_execute() and before_cursor_execute() events only. When True, the user-defined event function must have a return value, which is a tuple of parameters that replace the given statement and parameters. See those methods for a description of specific return arguments.
after_cursor_execute(conn, cursor, statement, parameters, context, executemany)
Intercept low-level cursor execute() events.
after_execute(conn, clauseelement, multiparams, params, result)
Intercept high level execute() events.
before_cursor_execute(conn, cursor, statement, parameters, context, executemany)
Intercept low-level cursor execute() events.
before_execute(conn, clauseelement, multiparams, params)
Intercept high level execute() events.
begin(conn)
Intercept begin() events.
begin_twophase(conn, xid)
Intercept begin_twophase() events.
commit(conn)
Intercept commit() events.
commit_twophase(conn, xid, is_prepared)
Intercept commit_twophase() events.
prepare_twophase(conn, xid)
Intercept prepare_twophase() events.
release_savepoint(conn, name, context)
Intercept release_savepoint() events.
rollback(conn)
Intercept rollback() events.
rollback_savepoint(conn, name, context)
Intercept rollback_savepoint() events.
rollback_twophase(conn, xid, is_prepared)
Intercept rollback_twophase() events.
savepoint(conn, name=None)
Intercept savepoint() events.
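To make the retval=True modifier concrete, the sketch below rewrites each statement in a before_cursor_execute listener by returning a replacement (statement, parameters) tuple; the comment_sql name and the comment prefix are illustrative only:

    from sqlalchemy import create_engine, event

    def comment_sql(conn, cursor, statement, parameters, context, executemany):
        # with retval=True, the tuple returned here replaces the
        # original statement and parameters before they reach the
        # DBAPI cursor
        return "/* traced */ " + statement, parameters

    engine = create_engine('sqlite://')
    event.listen(engine, 'before_cursor_execute', comment_sql, retval=True)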
MetaData, Table, Column. MetaData and Table support events specifically regarding when CREATE and DROP DDL is emitted to the database. Attachment events are also provided to customize behavior whenever a child schema element is associated with a parent, such as when a Column is associated with its Table, when a ForeignKeyConstraint is associated with a Table, etc. Example using the after_create event:

    from sqlalchemy import event
    from sqlalchemy import Table, Column, MetaData, Integer

    m = MetaData()
    some_table = Table('some_table', m, Column('data', Integer))

    def after_create(target, connection, **kw):
        connection.execute("ALTER TABLE %s SET name=foo_%s" %
                                (target.name, target.name))

    event.listen(some_table, "after_create", after_create)

DDL events integrate closely with the DDL class and the DDLElement hierarchy of DDL clause constructs, which are themselves appropriate as listener callables:

    from sqlalchemy import DDL
    event.listen(
        some_table,
        "after_create",
        DDL("ALTER TABLE %(table)s SET name=foo_%(table)s")
    )

The methods here define the name of an event as well as the names of members that are passed to listener functions.
See also:
Events
DDLElement
DDL
Controlling DDL Sequences
after_create(target, connection, **kw)
Called after CREATE statements are emitted.
Parameters
    target – the MetaData or Table object which is the target of the event.
    connection – the Connection where the CREATE statement or statements have been emitted.
    **kw – additional keyword arguments relevant to the event. Currently this includes the tables argument in the case of a MetaData object, which is the list of Table objects for which CREATE has been emitted.
after_drop(target, connection, **kw)
Called after DROP statements are emitted.
Parameters
    target – the MetaData or Table object which is the target of the event.
    connection – the Connection where the DROP statement or statements have been emitted.
    **kw – additional keyword arguments relevant to the event. Currently this includes the tables argument in the case of a MetaData object, which is the list of Table objects for which DROP has been emitted.
after_parent_attach(target, parent)
Called after a SchemaItem is associated with a parent SchemaItem.
Parameters
    target – the target object
    parent – the parent to which the target is being attached.
event.listen() also accepts a modifier for this event:
Parameters
    propagate=False – When True, the listener function will be established for any copies made of the target object, i.e. those copies that are generated when Table.tometadata() is used.
before_create(target, connection, **kw)
Called before CREATE statements are emitted.
Parameters
    target – the MetaData or Table object which is the target of the event.
    connection – the Connection where the CREATE statement or statements will be emitted.
    **kw – additional keyword arguments relevant to the event. Currently this includes the tables argument in the case of a MetaData object, which is the list of Table objects for which CREATE will be emitted.
before_drop(target, connection, **kw)
Called before DROP statements are emitted.
Parameters
    target – the MetaData or Table object which is the target of the event.
    connection – the Connection where the DROP statement or statements will be emitted.
    **kw – additional keyword arguments relevant to the event. Currently this includes the tables argument in the case of a MetaData object, which is the list of Table objects for which DROP will be emitted.
before_parent_attach(target, parent)
Called before a SchemaItem is associated with a parent SchemaItem.
Parameters
    target – the target object
    parent – the parent to which the target is being attached.
event.listen() also accepts a modifier for this event:
Parameters
    propagate=False – When True, the listener function will be established for any copies made of the target object, i.e. those copies that are generated when Table.tometadata() is used.
column_reflect(table, column_info)
Called for each unit of 'column info' retrieved when a Table is being reflected.
The dictionary of column information as returned by the dialect is passed, and can be modified. The dictionary is that returned in each element of the list returned by reflection.Inspector.get_columns(). The event is called before any action is taken against this dictionary, and the contents can be modified. The Column specific arguments info, key, and quote can also be added to the dictionary and will be passed to the constructor of Column.
Note that this event is only meaningful if either associated with the Table class across the board, e.g.:

    from sqlalchemy.schema import Table
    from sqlalchemy import event

    def listen_for_reflect(table, column_info):
        "receive a column_reflect event"
        # ...

    event.listen(
            Table,
            'column_reflect',
            listen_for_reflect)

...or with a specific Table instance using the listeners argument:

    def listen_for_reflect(table, column_info):
        "receive a column_reflect event"
        # ...

    # assuming m is a MetaData associated with an engine
    t = Table(
        'sometable',
        m,
        autoload=True,
        listeners=[
            ('column_reflect', listen_for_reflect)
        ])

This is because the reflection process initiated by autoload=True completes within the scope of the constructor for Table.

class sqlalchemy.events.SchemaEventTarget
Base class for elements that are the targets of DDLEvents events.
This includes SchemaItem as well as SchemaType.
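Returning to the column_reflect hook above, a concrete (hedged) illustration of modifying the dictionary; the String(100) substitution is purely illustrative of replacing the 'type' entry:

    from sqlalchemy import Table, event
    from sqlalchemy.types import String

    def listen_for_reflect(table, column_info):
        # column_info is the dialect-supplied dictionary; replacing
        # its 'type' entry changes the type of the Column that is
        # constructed from it
        if isinstance(column_info['type'], String):
            column_info['type'] = String(100)

    event.listen(Table, 'column_reflect', listen_for_reflect)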
3.10.1 Synopsis
Usage involves the creation of one or more ClauseElement subclasses and one or more callables defining its compilation:
    from sqlalchemy.ext.compiler import compiles
    from sqlalchemy.sql.expression import ColumnClause

    class MyColumn(ColumnClause):
        pass

    @compiles(MyColumn)
    def compile_mycolumn(element, compiler, **kw):
        return "[%s]" % element.name

Above, MyColumn extends ColumnClause, the base expression element for named column objects. The compiles decorator registers itself with the MyColumn class so that it is invoked when the object is compiled to a string:

    from sqlalchemy import select

    s = select([MyColumn('x'), MyColumn('y')])
    print str(s)

Produces:

    SELECT [x], [y]
    @compiles(AlterColumn, 'postgresql')
    def visit_alter_column(element, compiler, **kw):
        return "ALTER TABLE %s ALTER COLUMN %s ..." % (element.table.name,
                                                       element.column.name)

The second visit_alter_column will be invoked when any postgresql dialect is used.
    from sqlalchemy.sql.expression import Executable, ClauseElement

    class InsertFromSelect(Executable, ClauseElement):
        def __init__(self, table, select):
            self.table = table
            self.select = select

    @compiles(InsertFromSelect)
    def visit_insert_from_select(element, compiler, **kw):
        return "INSERT INTO %s (%s)" % (
            compiler.process(element.table, asfrom=True),
            compiler.process(element.select)
        )

    insert = InsertFromSelect(t1, select([t1]).where(t1.c.x > 5))
    print insert

Produces:
"INSERT INTO mytable (SELECT mytable.x, mytable.y, mytable.z FROM mytable WHERE mytable.x > Note: The above InsertFromSelect construct probably wants to have autocommit enabled. See Enabling Autocommit on a Construct for this step. Cross Compiling between SQL and DDL compilers SQL and DDL constructs are each compiled using different base compilers - SQLCompiler and DDLCompiler. A common need is to access the compilation rules of SQL expressions from within a DDL expression. The DDLCompiler includes an accessor sql_compiler for this reason, such as below where we generate a CHECK constraint that embeds a SQL expression: @compiles(MyConstraint) def compile_my_constraint(constraint, ddlcompiler, **kw): return "CONSTRAINT %s CHECK (%s)" % ( constraint.name, ddlcompiler.sql_compiler.process(constraint.expression) )
    from sqlalchemy.sql.expression import UpdateBase

    class MyInsertThing(UpdateBase):
        def __init__(self, ...):
            ...

DDL elements that subclass DDLElement already have the "autocommit" flag turned on.
ColumnElement classes want to have a type member which is the expression's return type. This can be established at the instance level in the constructor, or at the class level if it's generally constant:

    class timestamp(ColumnElement):
        type = TIMESTAMP()

FunctionElement - This is a hybrid of a ColumnElement and a "from clause" like object, and represents a SQL function or stored procedure type of call. Since most databases support statements along the line of "SELECT FROM <some function>", FunctionElement adds in the ability to be used in the FROM clause of a select() construct:

    from sqlalchemy.sql.expression import FunctionElement

    class coalesce(FunctionElement):
        name = 'coalesce'

    @compiles(coalesce)
    def compile(element, compiler, **kw):
        return "coalesce(%s)" % compiler.process(element.clauses)

    @compiles(coalesce, 'oracle')
    def compile(element, compiler, **kw):
        if len(element.clauses) > 2:
            raise TypeError("coalesce only supports two arguments on Oracle")
        return "nvl(%s)" % compiler.process(element.clauses)

DDLElement - The root of all DDL expressions, like CREATE TABLE, ALTER TABLE, etc. Compilation of DDLElement subclasses is issued by a DDLCompiler instead of a SQLCompiler. DDLElement also features Table and MetaData event hooks via the execute_at() method, allowing the construct to be invoked during CREATE TABLE and DROP TABLE sequences.
Executable - This is a mixin which should be used with any expression class that represents a "standalone" SQL statement that can be passed directly to an execute() method. It is already implicit within DDLElement and FunctionElement.
return "TIMEZONE(utc, CURRENT_TIMESTAMP)" @compiles(utcnow, mssql) def ms_utcnow(element, compiler, **kw): return "GETUTCDATE()" Example usage: from sqlalchemy import ( Table, Column, Integer, String, DateTime, MetaData ) metadata = MetaData() event = Table("event", metadata, Column("id", Integer, primary_key=True), Column("description", String(50), nullable=False), Column("timestamp", DateTime, server_default=utcnow()) ) GREATEST function The GREATEST function is given any number of arguments and returns the one that is of the highest value - its equivalent to Pythons max function. A SQL standard version versus a CASE based version which only accommodates two arguments: from sqlalchemy.sql import expression from sqlalchemy.ext.compiler import compiles from sqlalchemy.types import Numeric class greatest(expression.FunctionElement): type = Numeric() name = greatest @compiles(greatest) def default_greatest(element, compiler, **kw): return compiler.visit_function(element) @compiles(greatest, sqlite) @compiles(greatest, mssql) @compiles(greatest, oracle) def case_greatest(element, compiler, **kw): arg1, arg2 = list(element.clauses) return "CASE WHEN %s > %s THEN %s ELSE %s END" % ( compiler.process(arg1), compiler.process(arg2), compiler.process(arg1), compiler.process(arg2), ) Example usage: Session.query(Account).\ filter( greatest( Account.checking_balance, Account.savings_balance) > 10000 ) 416 Chapter 3. SQLAlchemy Core
"false" expression

Render a "false" constant expression, rendering as "0" on platforms that don't have a "false" constant:

    from sqlalchemy.sql import expression
    from sqlalchemy.ext.compiler import compiles

    class sql_false(expression.ColumnElement):
        pass

    @compiles(sql_false)
    def default_false(element, compiler, **kw):
        return "false"

    @compiles(sql_false, 'mssql')
    @compiles(sql_false, 'mysql')
    @compiles(sql_false, 'oracle')
    def int_false(element, compiler, **kw):
        return "0"

Example usage:

    from sqlalchemy import select, union_all

    exp = union_all(
        select([users.c.name, sql_false().label("enrolled")]),
        select([customers.c.name, customers.c.enrolled])
    )
Similar restrictions as when using raw pickle apply; mapped classes must themselves be pickleable, meaning they are importable from a module-level namespace.
The serializer module is only appropriate for query structures. It is not needed for:
instances of user-defined classes. These contain no references to engines, sessions or expression constructs in the typical case and can be serialized directly.
Table metadata that is to be loaded entirely from the serialized structure (i.e. is not already declared in the application). Regular pickle.loads()/dumps() can be used to fully dump any MetaData object, typically one which was reflected from an existing database at some previous point in time. The serializer module is specifically for the opposite case, where the Table metadata is already present in memory.
sqlalchemy.ext.serializer.Serializer(*args, **kw)
sqlalchemy.ext.serializer.Deserializer(file, metadata=None, scoped_session=None, engine=None)
sqlalchemy.ext.serializer.dumps(obj, protocol=0)
sqlalchemy.ext.serializer.loads(data, metadata=None, scoped_session=None, engine=None)
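A hedged sketch of round-tripping a Query through dumps()/loads(), assuming mappers are already configured and that metadata and a scoped_session named Session are bound to an engine (MyClass and somedata are illustrative):

    from sqlalchemy.ext.serializer import loads, dumps

    query = Session.query(MyClass).filter(MyClass.somedata == 'foo')

    # serialize the query structure to a pickle byte string
    serialized = dumps(query)

    # rebuild it later, supplying the live metadata and session
    query2 = loads(serialized, metadata, Session)
    print query2.all()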
    def cursor_execute(self, execute, cursor, statement, parameters,
                       context, executemany):
        print "raw statement:", statement
        return execute(cursor, statement, parameters, context)

The execute argument is a function that will fulfill the default execution behavior for the operation. The signature illustrated in the example should be used. The proxy is installed into an Engine via the proxy argument:

    e = create_engine('someurl://', proxy=MyProxy())

begin(conn, begin)
Intercept begin() events.
begin_twophase(conn, begin_twophase, xid)
Intercept begin_twophase() events.
commit(conn, commit)
Intercept commit() events.
commit_twophase(conn, commit_twophase, xid, is_prepared)
Intercept commit_twophase() events.
cursor_execute(execute, cursor, statement, parameters, context, executemany)
Intercept low-level cursor execute() events.
execute(conn, execute, clauseelement, *multiparams, **params)
Intercept high level execute() events.
prepare_twophase(conn, prepare_twophase, xid)
Intercept prepare_twophase() events.
release_savepoint(conn, release_savepoint, name, context)
Intercept release_savepoint() events.
rollback(conn, rollback)
Intercept rollback() events.
rollback_savepoint(conn, rollback_savepoint, name, context)
Intercept rollback_savepoint() events.
rollback_twophase(conn, rollback_twophase, xid, is_prepared)
Intercept rollback_twophase() events.
savepoint(conn, savepoint, name=None)
Intercept savepoint() events.
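For the high level execute() hook, the same pattern applies at the statement level as in the cursor_execute example above; a brief sketch:

    from sqlalchemy.interfaces import ConnectionProxy

    class MyProxy(ConnectionProxy):
        def execute(self, conn, execute, clauseelement, *multiparams, **params):
            # clauseelement is the un-compiled construct; calling
            # execute() carries out the default behavior and returns
            # its result
            print "compiled statement:", clauseelement
            return execute(clauseelement, *multiparams, **params)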
For any given DB-API connection, there will be one connect event, n number of checkout events, and either n or n - 1 checkin events. (If a Connection is detached from its pool via the detach() method, it won't be checked back in.)
These are low-level events for low-level objects: raw Python DB-API connections, without the conveniences of the SQLAlchemy Connection wrapper, Dialect services or ClauseElement execution. If you execute SQL through the connection, explicitly closing all cursors and other resources is recommended.
Events also receive a _ConnectionRecord, a long-lived internal Pool object that basically represents a "slot" in the connection pool. _ConnectionRecord objects have one public attribute of note: info, a dictionary whose contents are scoped to the lifetime of the DB-API connection managed by the record. You can use this shared storage area however you like.
There is no need to subclass PoolListener to handle events. Any class that implements one or more of these methods can be used as a pool listener. The Pool will inspect the methods provided by a listener object and add the listener to one or more internal event queues based on its capabilities. In terms of efficiency and function call overhead, you're much better off only providing implementations for the hooks you'll be using.
checkin(dbapi_con, con_record)
Called when a connection returns to the pool. Note that the connection may be closed, and may be None if the connection has been invalidated. checkin will not be called for detached connections. (They do not return to the pool.)
    dbapi_con – A raw DB-API connection
    con_record – The _ConnectionRecord that persistently manages the connection
checkout(dbapi_con, con_record, con_proxy)
Called when a connection is retrieved from the Pool.
    dbapi_con – A raw DB-API connection
    con_record – The _ConnectionRecord that persistently manages the connection
    con_proxy – The _ConnectionFairy which manages the connection for the span of the current checkout.
If you raise an exc.DisconnectionError, the current connection will be disposed and a fresh connection retrieved. Processing of all checkout listeners will abort and restart using the new connection.
connect(dbapi_con, con_record)
Called once for each new DB-API connection or Pool's creator().
    dbapi_con – A newly connected raw DB-API connection (not a SQLAlchemy Connection wrapper).
    con_record – The _ConnectionRecord that persistently manages the connection
first_connect(dbapi_con, con_record)
Called exactly once for the first DB-API connection.
    dbapi_con – A newly connected raw DB-API connection (not a SQLAlchemy Connection wrapper).
    con_record – The _ConnectionRecord that persistently manages the connection
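A minimal sketch of such a listener class, installed via the legacy listeners argument to create_engine() (the SQLite PRAGMA shown is an illustrative use, not part of the interface):

    from sqlalchemy import create_engine
    from sqlalchemy.interfaces import PoolListener

    class ForeignKeysListener(PoolListener):
        def connect(self, dbapi_con, con_record):
            # runs once per new DB-API connection; here, enable
            # SQLite foreign key enforcement on each one
            dbapi_con.execute('PRAGMA foreign_keys=ON')

    engine = create_engine('sqlite://', listeners=[ForeignKeysListener()])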
exception sqlalchemy.exc.ArgumentError
Bases: sqlalchemy.exc.SQLAlchemyError
Raised when an invalid or conflicting function argument is supplied.
This error generally corresponds to construction time state errors.
exception sqlalchemy.exc.CircularDependencyError(message, cycles, edges)
Bases: sqlalchemy.exc.SQLAlchemyError
Raised by topological sorts when a circular dependency is detected.
There are two scenarios where this error occurs:
In a Session flush operation, if two objects are mutually dependent on each other, they can not be inserted or deleted via INSERT or DELETE statements alone; an UPDATE will be needed to post-associate or pre-deassociate one of the foreign key constrained values. The post_update flag described at Rows that point to themselves / Mutually Dependent Rows can resolve this cycle.
In a MetaData.create_all(), MetaData.drop_all(), MetaData.sorted_tables operation, two ForeignKey or ForeignKeyConstraint objects mutually refer to each other. Apply the use_alter=True flag to one or both, see Creating/Dropping Foreign Key Constraints via ALTER.
exception sqlalchemy.exc.CompileError
Bases: sqlalchemy.exc.SQLAlchemyError
Raised when an error occurs during SQL compilation.
exception sqlalchemy.exc.DBAPIError(statement, params, orig, connection_invalidated=False)
Bases: sqlalchemy.exc.StatementError
Raised when the execution of a database operation fails.
DBAPIError wraps exceptions raised by the DB-API underlying the database operation. Driver-specific implementations of the standard DB-API exception types are wrapped by matching sub-types of SQLAlchemy's DBAPIError when possible. DB-API's Error type maps to DBAPIError in SQLAlchemy, otherwise the names are identical. Note that there is no guarantee that different DB-API implementations will raise the same exception type for any given error condition.
DBAPIError features statement and params attributes which supply context regarding the specifics of the statement which had an issue, for the typical case when the error was raised within the context of emitting a SQL statement.
The wrapped exception object is available in the orig attribute. Its type and properties are DB-API implementation specific.
exception sqlalchemy.exc.DataError(statement, params, orig, connection_invalidated=False)
Bases: sqlalchemy.exc.DatabaseError
Wraps a DB-API DataError.
exception sqlalchemy.exc.DatabaseError(statement, params, orig, connection_invalidated=False)
Bases: sqlalchemy.exc.DBAPIError
Wraps a DB-API DatabaseError.
exception sqlalchemy.exc.DisconnectionError
Bases: sqlalchemy.exc.SQLAlchemyError
A disconnect is detected on a raw DB-API connection.
This error is raised and consumed internally by a connection pool. It can be raised by a PoolListener so that the host pool forces a disconnect.
class sqlalchemy.exc.DontWrapMixin
Bases: object
A mixin class which, when applied to a user-defined Exception class, will not be wrapped inside of StatementError if the error is emitted within the process of executing a statement.
E.g.:

    from sqlalchemy.exc import DontWrapMixin

    class MyCustomException(Exception, DontWrapMixin):
        pass

    class MySpecialType(TypeDecorator):
        impl = String

        def process_bind_param(self, value, dialect):
            if value == 'invalid':
                raise MyCustomException("invalid!")

exception sqlalchemy.exc.IdentifierError
Bases: sqlalchemy.exc.SQLAlchemyError
Raised when a schema name is beyond the max character limit.
exception sqlalchemy.exc.IntegrityError(statement, params, orig, connection_invalidated=False)
Bases: sqlalchemy.exc.DatabaseError
Wraps a DB-API IntegrityError.
exception sqlalchemy.exc.InterfaceError(statement, params, orig, connection_invalidated=False)
Bases: sqlalchemy.exc.DBAPIError
Wraps a DB-API InterfaceError.
exception sqlalchemy.exc.InternalError(statement, params, orig, connection_invalidated=False)
Bases: sqlalchemy.exc.DatabaseError
Wraps a DB-API InternalError.
exception sqlalchemy.exc.InvalidRequestError
Bases: sqlalchemy.exc.SQLAlchemyError
SQLAlchemy was asked to do something it can't do.
This error generally corresponds to runtime state errors.
exception sqlalchemy.exc.NoReferenceError
Bases: sqlalchemy.exc.InvalidRequestError
Raised by ForeignKey to indicate a reference cannot be resolved.
exception sqlalchemy.exc.NoReferencedColumnError(message, tname, cname)
Bases: sqlalchemy.exc.NoReferenceError
Raised by ForeignKey when the referred Column cannot be located.
exception sqlalchemy.exc.NoReferencedTableError(message, tname)
Bases: sqlalchemy.exc.NoReferenceError
Raised by ForeignKey when the referred Table cannot be located.
exception sqlalchemy.exc.NoSuchColumnError
Bases: exceptions.KeyError, sqlalchemy.exc.InvalidRequestError
A nonexistent column is requested from a RowProxy.
exception sqlalchemy.exc.NoSuchTableError
Bases: sqlalchemy.exc.InvalidRequestError
Table does not exist or is not visible to a connection.
exception sqlalchemy.exc.NotSupportedError(statement, params, orig, connection_invalidated=False)
Bases: sqlalchemy.exc.DatabaseError
Wraps a DB-API NotSupportedError.
exception sqlalchemy.exc.OperationalError(statement, params, orig, connection_invalidated=False)
Bases: sqlalchemy.exc.DatabaseError
Wraps a DB-API OperationalError.
exception sqlalchemy.exc.ProgrammingError(statement, params, orig, connection_invalidated=False)
Bases: sqlalchemy.exc.DatabaseError
Wraps a DB-API ProgrammingError.
exception sqlalchemy.exc.ResourceClosedError
Bases: sqlalchemy.exc.InvalidRequestError
An operation was requested from a connection, cursor, or other object that's in a closed state.
exception sqlalchemy.exc.SADeprecationWarning
Bases: exceptions.DeprecationWarning
Issued once per usage of a deprecated API.
exception sqlalchemy.exc.SAPendingDeprecationWarning
Bases: exceptions.PendingDeprecationWarning
Issued once per usage of a deprecated API.
exception sqlalchemy.exc.SAWarning
Bases: exceptions.RuntimeWarning
Issued at runtime.
exception sqlalchemy.exc.SQLAlchemyError
Bases: exceptions.Exception
Generic error class.
exception sqlalchemy.exc.StatementError(message, statement, params, orig)
Bases: sqlalchemy.exc.SQLAlchemyError
An error occurred during execution of a SQL statement.
StatementError wraps the exception raised during execution, and features statement and params attributes which supply context regarding the specifics of the statement which had an issue.
The wrapped exception object is available in the orig attribute.
exception sqlalchemy.exc.TimeoutError
Bases: sqlalchemy.exc.SQLAlchemyError
Raised when a connection pool times out on getting a connection.
exception sqlalchemy.exc.UnboundExecutionError
Bases: sqlalchemy.exc.InvalidRequestError
SQL was attempted without a database connection to execute it on.
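A brief hedged sketch of how these wrappers surface in practice: the original driver exception is preserved on .orig, and statement context is attached when available (the nonexistent table is illustrative):

    from sqlalchemy import create_engine, exc

    engine = create_engine('sqlite://')
    try:
        engine.execute("SELECT * FROM no_such_table")
    except exc.DBAPIError, e:
        # e arrives as a subclass such as OperationalError; e.orig is
        # the raw sqlite3 exception, e.statement the failed SQL string
        print type(e).__name__, e.statement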
Default implementation of Dialect
connect(*cargs, **cparams)
create_connect_args(url)
create_xid()
Create a random two-phase transaction ID. This id will be passed to do_begin_twophase(), do_rollback_twophase(), do_commit_twophase(). Its format is unspecified.
ddl_compiler
alias of DDLCompiler
dialect_description
do_begin(connection)
Implementations might want to put logic here for turning autocommit on/off, etc.
do_commit(connection)
Implementations might want to put logic here for turning autocommit on/off, etc.
do_execute(cursor, statement, parameters, context=None)
do_executemany(cursor, statement, parameters, context=None)
do_release_savepoint(connection, name)
do_rollback(connection)
Implementations might want to put logic here for turning autocommit on/off, etc.
do_rollback_to_savepoint(connection, name)
do_savepoint(connection, name)
execute_sequence_format
alias of tuple
execution_ctx_cls
alias of DefaultExecutionContext
get_pk_constraint(conn, table_name, schema=None, **kw)
Compatibility method, adapts the result of get_primary_keys() for those dialects which don't implement get_pk_constraint().
classmethod get_pool_class(url)
initialize(connection)
is_disconnect(e, connection, cursor)
on_connect()
Return a callable which sets up a newly created DBAPI connection.
This is used to set dialect-wide per-connection options such as isolation modes, unicode modes, etc.
If a callable is returned, it will be assembled into a pool listener that receives the direct DBAPI connection, with all wrappers removed.
If None is returned, no listener will be generated.
preparer
alias of IdentifierPreparer
reflecttable(connection, table, include_columns)
reset_isolation_level(dbapi_conn)
statement_compiler
alias of SQLCompiler
type_compiler
alias of GenericTypeCompiler
type_descriptor(typeobj)
Provide a database-specific TypeEngine object, given the generic object which comes from the types module.
This method looks for a dictionary called colspecs as a class or instance-level variable, and passes on to types.adapt_type().
validate_identifier(ident)

class sqlalchemy.engine.base.Dialect
Bases: object
Define the behavior of a specific database and DB-API combination.
Any aspect of metadata definition, SQL query generation, execution, result-set handling, or anything else which varies between databases is defined under the general category of the Dialect. The Dialect acts as a factory for other database-specific object implementations including ExecutionContext, Compiled, DefaultGenerator, and TypeEngine.
All Dialects implement the following attributes:
name – identifying name for the dialect from a DBAPI-neutral point of view (i.e. 'sqlite')
driver – identifying name for the dialect's DBAPI
positional – True if the paramstyle for this Dialect is positional.
paramstyle – the paramstyle to be used (some DB-APIs support multiple paramstyles).
convert_unicode – True if Unicode conversion should be applied to all str types.
encoding – type of encoding to use for unicode, usually defaults to 'utf-8'.
statement_compiler – a Compiled class used to compile SQL statements
ddl_compiler – a Compiled class used to compile DDL statements
server_version_info – a tuple containing a version number for the DB backend in use. This value is only available for supporting dialects, and is typically populated during the initial connection to the database.
default_schema_name – the name of the default schema. This value is only available for supporting dialects, and is typically populated during the initial connection to the database.
execution_ctx_cls – an ExecutionContext class used to handle statement execution
execute_sequence_format – either the 'tuple' or 'list' type, depending on what cursor.execute() accepts for the second argument (they vary).
preparer – an IdentifierPreparer class used to quote identifiers.
supports_alter – True if the database supports ALTER TABLE.
max_identifier_length – The maximum length of identifier names.
supports_unicode_statements – Indicate whether the DB-API can receive SQL statements as Python unicode strings.
supports_unicode_binds – Indicate whether the DB-API can receive string bind parameters as Python unicode strings.
supports_sane_rowcount – Indicate whether the dialect properly implements rowcount for UPDATE and DELETE statements.
supports_sane_multi_rowcount – Indicate whether the dialect properly implements rowcount for UPDATE and DELETE statements when executed via executemany.
preexecute_autoincrement_sequences – True if 'implicit' primary key functions must be executed separately in order to get their value. This is currently oriented towards Postgresql.
implicit_returning – use RETURNING or equivalent during INSERT execution in order to load newly generated primary keys and other column defaults in one execution, which are then available via
inserted_primary_key. If an insert statement has returning() specified explicitly, the "implicit" functionality is not used and inserted_primary_key will not be available.
dbapi_type_map – A mapping of DB-API type objects present in this Dialect's DB-API implementation mapped to TypeEngine implementations used by the dialect. This is used to apply types to result sets based on the DB-API types present in cursor.description; it only takes effect for result sets against textual statements where no explicit typemap was present.
colspecs – A dictionary of TypeEngine classes from sqlalchemy.types mapped to subclasses that are specific to the dialect class. This dictionary is class-level only and is not accessed from the dialect instance itself.
supports_default_values – Indicates if the construct INSERT INTO tablename DEFAULT VALUES is supported.
supports_sequences – Indicates if the dialect supports CREATE SEQUENCE or similar.
sequences_optional – If True, indicates if the "optional" flag on the Sequence() construct should signal to not generate a CREATE SEQUENCE. Applies only to dialects that support sequences. Currently used only to allow Postgresql SERIAL to be used on a column that specifies Sequence() for usage on other backends.
supports_native_enum – Indicates if the dialect supports a native ENUM construct. This will prevent types.Enum from generating a CHECK constraint when that type is used.
supports_native_boolean – Indicates if the dialect supports a native boolean construct. This will prevent types.Boolean from generating a CHECK constraint when that type is used.
connect()
Return a callable which sets up a newly created DBAPI connection.
The callable accepts a single argument "conn" which is the DBAPI connection itself. It has no return value.
This is used to set dialect-wide per-connection options such as isolation modes, unicode modes, etc.
If a callable is returned, it will be assembled into a pool listener that receives the direct DBAPI connection, with all wrappers removed.
If None is returned, no listener will be generated.
create_connect_args(url)
Build DB-API compatible connection arguments.
Given a URL object, returns a tuple consisting of a *args/**kwargs suitable to send directly to the dbapi's connect function.
create_xid()
Create a two-phase transaction ID.
This id will be passed to do_begin_twophase(), do_rollback_twophase(), do_commit_twophase(). Its format is unspecified.
denormalize_name(name)
Convert the given name to a case insensitive identifier for the backend if it is an all-lowercase name.
This method is only used if the dialect defines requires_name_normalize=True.
do_begin(connection)
Provide an implementation of connection.begin(), given a DB-API connection.
do_begin_twophase(connection, xid)
Begin a two phase transaction on the given connection.
do_commit(connection)
Provide an implementation of connection.commit(), given a DB-API connection.
do_commit_twophase(connection, xid, is_prepared=True, recover=False)
Commit a two phase transaction on the given connection.
do_execute(cursor, statement, parameters, context=None)
Provide an implementation of cursor.execute(statement, parameters).
do_executemany(cursor, statement, parameters, context=None)
Provide an implementation of cursor.executemany(statement, parameters).
do_prepare_twophase(connection, xid)
Prepare a two phase transaction on the given connection.
do_recover_twophase(connection)
Recover list of uncommitted prepared two phase transaction identifiers on the given connection.
do_release_savepoint(connection, name)
Release the named savepoint on a SQLAlchemy connection.
do_rollback(connection)
Provide an implementation of connection.rollback(), given a DB-API connection.
do_rollback_to_savepoint(connection, name)
Rollback a SQLAlchemy connection to the named savepoint.
do_rollback_twophase(connection, xid, is_prepared=True, recover=False)
Rollback a two phase transaction on the given connection.
do_savepoint(connection, name)
Create a savepoint with the given name on a SQLAlchemy connection.
get_columns(connection, table_name, schema=None, **kw)
Return information about columns in table_name.
Given a Connection, a string table_name, and an optional string schema, return column information as a list of dictionaries with these keys:
    name – the column's name
    type – [sqlalchemy.types#TypeEngine]
    nullable – boolean
    default – the column's default value
    autoincrement – boolean
    sequence – a dictionary of the form {'name' : str, 'start' : int, 'increment' : int}
Additional column attributes may be present.
get_foreign_keys(connection, table_name, schema=None, **kw)
Return information about foreign_keys in table_name.
Given a Connection, a string table_name, and an optional string schema, return foreign key information as a list of dicts with these keys:
    name – the constraint's name
    constrained_columns – a list of column names that make up the foreign key
    referred_schema – the name of the referred schema
    referred_table – the name of the referred table
    referred_columns – a list of column names in the referred table that correspond to constrained_columns
get_indexes(connection, table_name, schema=None, **kw)
Return information about indexes in table_name.
Given a Connection, a string table_name and an optional string schema, return index information as a list of dictionaries with these keys:
    name – the index's name
    column_names – list of column names in order
    unique – boolean
get_isolation_level(dbapi_conn)
Given a DBAPI connection, return its isolation level.
get_pk_constraint(table_name, schema=None, **kw)
Return information about the primary key constraint on table_name.
Given a string table_name, and an optional string schema, return primary key information as a dictionary with these keys:
    constrained_columns – a list of column names that make up the primary key
    name – optional name of the primary key constraint.
get_primary_keys(connection, table_name, schema=None, **kw)
Return information about primary keys in table_name.
Given a Connection, a string table_name, and an optional string schema, return primary key information as a list of column names.
get_table_names(connection, schema=None, **kw)
Return a list of table names for schema.
get_view_definition(connection, view_name, schema=None, **kw)
Return view definition.
Given a Connection, a string view_name, and an optional string schema, return the view definition.
get_view_names(connection, schema=None, **kw)
Return a list of all view names available in the database.
schema: Optional, retrieve names from a non-default schema.
has_sequence(connection, sequence_name, schema=None)
Check the existence of a particular sequence in the database.
Given a Connection object and a string sequence_name, return True if the given sequence exists in the database, False otherwise.
has_table(connection, table_name, schema=None)
Check the existence of a particular table in the database.
Given a Connection object and a string table_name, return True if the given table (possibly within the specified schema) exists in the database, False otherwise.
initialize(connection)
Called during strategized creation of the dialect with a connection.
Allows dialects to configure options based on server version info or other properties.
The connection passed here is a SQLAlchemy Connection object, with full capabilities.
The initialize() method of the base dialect should be called via super().
is_disconnect(e, connection, cursor)
Return True if the given DB-API error indicates an invalid connection.
normalize_name(name)
Convert the given name to lowercase if it is detected as case insensitive.
This method is only used if the dialect defines requires_name_normalize=True.
reflecttable(connection, table, include_columns=None)
Load table description from the database.
Given a Connection and a Table object, reflect its columns and properties from the database. If include_columns (a list or set) is specified, limit the autoload to the given column names.
The default implementation uses the Inspector interface to provide the output, building upon the granular table/column/constraint etc. methods of Dialect.
reset_isolation_level(dbapi_conn)
Given a DBAPI connection, revert its isolation to the default.
set_isolation_level(dbapi_conn, level)
Given a DBAPI connection, set its isolation level.
classmethod type_descriptor(typeobj)
Transform a generic type to a dialect-specific type.
Dialect classes will usually use the adapt_type() function in the types module to make this job easy.
The returned result is cached per dialect class so can contain no dialect-instance state.

class sqlalchemy.engine.default.DefaultExecutionContext
Bases: sqlalchemy.engine.base.ExecutionContext
connection
create_cursor()
get_insert_default(column)
get_lastrowid()
Return self.cursor.lastrowid, or equivalent, after an INSERT.
This may involve calling special cursor functions, issuing a new SELECT on the cursor (or a new one), or returning a stored value that was calculated within post_exec().
This function will only be called for dialects which support "implicit" primary key generation, keep preexecute_autoincrement_sequences set to False, and when no explicit id value was bound to the statement.
The function is called once, directly after post_exec() and before the transaction is committed or ResultProxy is generated. If the post_exec() method assigns a value to self._lastrowid, the value is used in place of calling get_lastrowid().
Note that this method is not equivalent to the lastrowid method on ResultProxy, which is a direct proxy to the DBAPI lastrowid accessor in all cases.
get_result_proxy()
get_update_default(column)
handle_dbapi_exception(e)
is_crud
lastrow_has_defaults()
post_exec()
post_insert()
pre_exec()
rowcount
set_input_sizes(translate=None, exclude_types=None)
Given a cursor and ClauseParameters, call the appropriate style of setinputsizes() on the cursor, using DB-API types from the bind parameters' TypeEngine objects.
This method is only called by those dialects which require it, currently cx_oracle.
should_autocommit
should_autocommit_text(statement)
supports_sane_multi_rowcount()
supports_sane_rowcount()

class sqlalchemy.engine.base.ExecutionContext
Bases: object
A messenger object for a Dialect that corresponds to a single execution.
ExecutionContext should have these data members:
connection – Connection object which can be freely used by default value generators to execute SQL. This Connection should reference the same underlying connection/transactional resources of root_connection.
root_connection – Connection object which is the source of this ExecutionContext. This Connection may have close_with_result=True set, in which case it can only be used once.
dialect – dialect which created this ExecutionContext.
cursor – DB-API cursor procured from the connection.
compiled – if passed to constructor, the sqlalchemy.engine.base.Compiled object being executed.
statement – string version of the statement to be executed. Is either passed to the constructor, or must be created from the sql.Compiled object by the time pre_exec() has completed.
parameters – bind parameters passed to the execute() method. For compiled statements, this is a dictionary or list of dictionaries. For textual statements, it should be in a format suitable for the dialect's paramstyle (i.e. dict or list of dicts for non positional, list or list of lists/tuples for positional).
isinsert – True if the statement is an INSERT.
isupdate – True if the statement is an UPDATE.
should_autocommit – True if the statement is a "committable" statement.
postfetch_cols – a list of Column objects for which a server-side default or inline SQL expression value was fired off. Applies to inserts and updates.
create_cursor()
Return a new cursor generated from this ExecutionContext's connection.
Some dialects may wish to change the behavior of connection.cursor(), such as postgresql which may return a PG "server side" cursor.
get_rowcount()
Return the number of rows produced (by a SELECT query) or affected (by an INSERT/UPDATE/DELETE statement).
Note that this row count may not be properly implemented in some dialects; this is indicated by the supports_sane_rowcount and supports_sane_multi_rowcount dialect attributes.
handle_dbapi_exception(e)
Receive a DBAPI exception which occurred upon execute, result fetch, etc.
lastrow_has_defaults()
Return True if the last INSERT or UPDATE row contained inlined or database-side defaults.
post_exec()
Called after the execution of a compiled statement.
If a compiled statement was passed to this ExecutionContext, the last_insert_ids, last_inserted_params, etc. datamembers should be available after this method completes.
pre_exec()
Called before an execution of a compiled statement.
If a compiled statement was passed to this ExecutionContext, the statement and parameters datamembers must be initialized after this statement is complete.
result()
Return a result object corresponding to this ExecutionContext.
Returns a ResultProxy.
should_autocommit_text(statement)
Parse the given textual statement and return True if it refers to a "committable" statement.

class sqlalchemy.sql.compiler.IdentifierPreparer(dialect, initial_quote='"', final_quote=None, escape_quote='"', omit_schema=False)
Bases: object
Handle quoting and case-folding of identifiers based on options.
__init__(dialect, initial_quote='"', final_quote=None, escape_quote='"', omit_schema=False)
Construct a new IdentifierPreparer object.
    initial_quote – Character that begins a delimited identifier.
    final_quote – Character that ends a delimited identifier. Defaults to initial_quote.
    omit_schema – Prevent prepending schema name. Useful for databases that do not support schemas.
format_column(column, use_table=False, name=None, table_name=None)
Prepare a quoted column name.
format_table(table, use_schema=True, name=None)
Prepare a quoted table and schema name.
format_table_seq(table, use_schema=True)
Format table name and schema as a tuple.
quote_identifier(value)
Quote an identifier.
Subclasses should override this to provide database-dependent quoting behavior.
quote_schema(schema, force)
Quote a schema.
Subclasses should override this to provide database-dependent quoting behavior.
unformat_identifiers(identifiers)
Unpack 'schema.table.column'-like strings into components.

class sqlalchemy.sql.compiler.SQLCompiler(dialect, statement, column_keys=None, inline=False, **kwargs)
Bases: sqlalchemy.engine.base.Compiled
Default implementation of Compiled.
Compiles ClauseElements into SQL strings. Uses a similar visit paradigm as visitors.ClauseVisitor but implements its own traversal.
__init__(dialect, statement, column_keys=None, inline=False, **kwargs)
Construct a new DefaultCompiler object.
    dialect – Dialect to be used
    statement – ClauseElement to be compiled
    column_keys – a list of column names to be compiled into an INSERT or UPDATE statement.
construct_params(params=None, _group_number=None)
Return a dictionary of bind parameter keys and values.
default_from()
Called when a SELECT statement has no froms, and no FROM clause is to be appended.
Gives Oracle a chance to tack on a "FROM DUAL" to the string output.
escape_literal_column(text)
Provide escaping for the literal_column() construct.
get_select_precolumns(select)
Called when building a SELECT statement, position is just before column list.
label_select_column(select, column, asfrom)
Label columns present in a select().
params
Return the bind params for this compiled object.
render_literal_value(value, type_)
Render the value of a bind parameter as a quoted literal.
This is used for statement sections that do not accept bind parameters on the target driver/database.
This should be implemented by subclasses using the quoting services of the DBAPI.
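To close out this reference section, a hedged sketch showing how the reflection methods described above (get_columns() and friends) are normally reached through the Inspector interface mentioned under reflecttable(), rather than called on the dialect directly (the table is illustrative):

    from sqlalchemy import create_engine
    from sqlalchemy.engine import reflection

    engine = create_engine('sqlite://')
    engine.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name VARCHAR(50))")

    insp = reflection.Inspector.from_engine(engine)
    for col in insp.get_columns('users'):
        # each element is a dictionary carrying the keys listed
        # above: name, type, nullable, default, ...
        print col['name'], col['type'], col['nullable']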
CHAPTER
FOUR
DIALECTS
The "dialect" is the system SQLAlchemy uses to communicate with various types of DBAPIs and databases. A compatibility chart of supported backends can be found at Supported Databases. The sections that follow contain reference documentation and notes specific to the usage of each backend, as well as notes for the various DBAPIs.
Note that not all backends are fully ported and tested with current versions of SQLAlchemy. The compatibility chart should be consulted to check for current support level.
4.1 Drizzle
Support for the Drizzle database.
See the official Drizzle documentation for detailed information about features supported in any given server release.
4.1.2 Connecting
See the API documentation on individual drivers for details on connecting.
4.1.5 Keys
Not all Drizzle storage engines support foreign keys. For BlitzDB and similar engines, the information loaded by table reflection will not include foreign keys. For these tables, you may supply a ForeignKeyConstraint at reflection time:

    Table('mytable', metadata,
        ForeignKeyConstraint(['other_id'], ['othertable.other_id']),
        autoload=True
    )

When creating tables, SQLAlchemy will automatically set AUTO_INCREMENT on an integer primary key column:

    >>> t = Table('mytable', metadata,
    ...     Column('mytable_id', Integer, primary_key=True)
    ... )
    >>> t.create()
    CREATE TABLE mytable (
        mytable_id INTEGER NOT NULL AUTO_INCREMENT,
        PRIMARY KEY (mytable_id)
    )

You can disable this behavior by supplying autoincrement=False to the Column. This flag can also be used to enable auto-increment on a secondary column in a multi-column key for some storage engines:

    Table('mytable', metadata,
        Column('gid', Integer, primary_key=True, autoincrement=False),
        Column('id', Integer, primary_key=True)
    )
Drizzle DOUBLE type.
__init__(precision=None, scale=None, asdecimal=True, **kw)
Construct a DOUBLE.
Parameters
    precision – Total digits in this number. If scale and precision are both None, values are stored to limits allowed by the server.
    scale – The number of digits after the decimal point.

class sqlalchemy.dialects.drizzle.ENUM(*enums, **kw)
Bases: sqlalchemy.dialects.mysql.base.ENUM
Drizzle ENUM type.
__init__(*enums, **kw)
Construct an ENUM.
Example:

    Column('myenum', ENUM("foo", "bar", "baz"))

Parameters
    enums – The range of valid values for this ENUM. Values will be quoted when generating the schema according to the quoting flag (see below).
    strict – Defaults to False: ensure that a given value is in this ENUM's range of permissible values when inserting or updating rows. Note that Drizzle will not raise a fatal error if you attempt to store an out of range value - an alternate value will be stored instead. (See Drizzle ENUM documentation.)
    collation – Optional, a column-level collation for this string value. Takes precedence to 'binary' short-hand.
    binary – Defaults to False: short-hand, pick the binary collation type that matches the column's character set. Generates BINARY in schema. This does not affect the type of data stored, only the collation of character data.
    quoting – Defaults to 'auto': automatically determine enum value quoting. If all enum values are surrounded by the same quoting character, then use 'quoted' mode. Otherwise, use 'unquoted' mode.
    'quoted': values in enums are already quoted, they will be used directly when generating the schema - this usage is deprecated.
    'unquoted': values in enums are not quoted, they will be escaped and surrounded by single quotes when generating the schema.
    Previous versions of this type always required manually quoted values to be supplied; future versions will always quote the string literals for you. This is a transitional option.

class sqlalchemy.dialects.drizzle.FLOAT(precision=None, scale=None, asdecimal=False, **kw)
Bases: sqlalchemy.dialects.drizzle.base._FloatType, sqlalchemy.types.FLOAT
Drizzle FLOAT type.
__init__(precision=None, scale=None, asdecimal=False, **kw)
Construct a FLOAT.
Parameters
    precision – Total digits in this number. If scale and precision are both None, values are stored to limits allowed by the server.
    scale – The number of digits after the decimal point.

class sqlalchemy.dialects.drizzle.INTEGER(**kw)
Bases: sqlalchemy.types.INTEGER
Drizzle INTEGER type.
__init__(**kw)
Construct an INTEGER.

class sqlalchemy.dialects.drizzle.NUMERIC(precision=None, scale=None, asdecimal=True, **kw)
Bases: sqlalchemy.dialects.drizzle.base._NumericType, sqlalchemy.types.NUMERIC
Drizzle NUMERIC type.
__init__(precision=None, scale=None, asdecimal=True, **kw)
Construct a NUMERIC.
Parameters
    precision – Total digits in this number. If scale and precision are both None, values are stored to limits allowed by the server.
    scale – The number of digits after the decimal point.

class sqlalchemy.dialects.drizzle.REAL(precision=None, scale=None, asdecimal=True, **kw)
Bases: sqlalchemy.dialects.drizzle.base._FloatType, sqlalchemy.types.REAL
Drizzle REAL type.
__init__(precision=None, scale=None, asdecimal=True, **kw)
Construct a REAL.
Parameters
    precision – Total digits in this number. If scale and precision are both None, values are stored to limits allowed by the server.
    scale – The number of digits after the decimal point.

class sqlalchemy.dialects.drizzle.TEXT(length=None, **kw)
Bases: sqlalchemy.dialects.drizzle.base._StringType, sqlalchemy.types.TEXT
Drizzle TEXT type, for text up to 2^16 characters.
__init__(length=None, **kw)
Construct a TEXT.
Parameters
    length – Optional, if provided the server may optimize storage by substituting the smallest TEXT type sufficient to store length characters.
    collation – Optional, a column-level collation for this string value. Takes precedence to 'binary' short-hand.
    binary – Defaults to False: short-hand, pick the binary collation type that matches the column's character set. Generates BINARY in schema. This does not affect the type of data stored, only the collation of character data.
class sqlalchemy.dialects.drizzle.TIMESTAMP(timezone=False)
Bases: sqlalchemy.types.TIMESTAMP
Drizzle TIMESTAMP type.

class sqlalchemy.dialects.drizzle.VARCHAR(length=None, **kwargs)
Bases: sqlalchemy.dialects.drizzle.base._StringType, sqlalchemy.types.VARCHAR
Drizzle VARCHAR type, for variable-length character data.
__init__(length=None, **kwargs)
Construct a VARCHAR.
Parameters
    collation – Optional, a column-level collation for this string value. Takes precedence to 'binary' short-hand.
    binary – Defaults to False: short-hand, pick the binary collation type that matches the column's character set. Generates BINARY in schema. This does not affect the type of data stored, only the collation of character data.
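A short hedged sketch of these Drizzle-specific types in a Table definition (the table and column names are illustrative):

    from sqlalchemy import Table, Column, MetaData, Integer
    from sqlalchemy.dialects.drizzle import DOUBLE, ENUM, VARCHAR

    metadata = MetaData()
    accounts = Table('accounts', metadata,
        Column('id', Integer, primary_key=True),
        Column('name', VARCHAR(length=50)),
        Column('balance', DOUBLE(precision=10, scale=2)),
        Column('status', ENUM('open', 'closed'))
    )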
4.2 Firebird
Support for the Firebird database. Connectivity is usually supplied via the kinterbasdb DBAPI module.
4.2.1 Dialects
Firebird offers two distinct dialects (not to be confused with a SQLAlchemy Dialect):
dialect 1 - This is the old syntax and behaviour, inherited from Interbase pre-6.0.
dialect 3 - This is the newer and supported syntax, introduced in Interbase 6.0.
The SQLAlchemy Firebird dialect detects these versions and adjusts its representation of SQL accordingly. However, support for dialect 1 is not well tested and probably has incompatibilities.
4.2.4 kinterbasdb
The most common way to connect to a Firebird engine is implemented by kinterbasdb, currently maintained directly by the Firebird people.
The connection URL is of the form firebird[+kinterbasdb]://user:password@host:port/path/to/db[?key=value&key=value...].

kinterbasdb backend-specific keyword arguments are:

type_conv - select the kind of mapping done on the types: by default SQLAlchemy uses 200 with Unicode, datetime and decimal support (see details).
concurrency_level - set the backend policy with regards to threading issues: by default SQLAlchemy uses policy 1 (see details).
4.2. Firebird
441
enable_rowcount - True by default; setting this to False disables the usage of cursor.rowcount with the kinterbasdb dialect, which SQLAlchemy ordinarily calls upon automatically after any UPDATE or DELETE statement. When disabled, SQLAlchemy's ResultProxy will return -1 for result.rowcount. The rationale here is that kinterbasdb requires a second round trip to the database when .rowcount is called - since SQLAlchemy's ResultProxy automatically closes the cursor after a non-result-returning statement, rowcount must be called, if at all, before the result object is returned. Additionally, cursor.rowcount may not return correct results with older versions of Firebird, and setting this flag to False will also cause the SQLAlchemy ORM to ignore its usage. The behavior can also be controlled on a per-execution basis using the enable_rowcount option with execution_options():

conn = engine.connect().execution_options(enable_rowcount=True)
r = conn.execute(stmt)
print r.rowcount
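The flag can likewise be set engine-wide; a minimal sketch, with a placeholder URL:

from sqlalchemy import create_engine

# disable rowcount support for all connections on this engine
e = create_engine('firebird+kinterbasdb://user:pass@host/path/to/db',
                  enable_rowcount=False)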
4.3 Informix
Support for the Informix database. This dialect is mostly functional as of SQLAlchemy 0.6.5.
4.4 MaxDB
Support for the MaxDB database. This dialect has been neither ported to nor tested on SQLAlchemy 0.6 or 0.7.
4.4.1 Overview
The maxdb dialect is experimental and has only been tested on 7.6.03.007 and 7.6.00.037. Of these, only 7.6.03.007 will work with SQLAlchemy's ORM. The earlier version has severe LEFT JOIN limitations and will return incorrect results from even very simple ORM queries. Only the native Python DB-API is currently supported. ODBC driver support is a future enhancement.
4.4.2 Connecting
The username is case-sensitive. If you usually connect to the database with sqlcli and other tools in lower case, you likely need to use upper case for DB-API.
sapdb.dbapi

As of 2007-10-22 the Python 2.4 and 2.5 compatible versions of the DB-API are no longer available. A forum posting at SAP states that the Python driver will be available again in the future. The last release from MySQL AB works if you can find it.

Known issues:

- sequence.NEXTVAL skips every other value!
- No rowcount for executemany().
- If an INSERT into a table with a DEFAULT SERIAL column inserts the results of a function (INSERT INTO t VALUES (LENGTH('foo'))), the cursor won't have the serial id. It needs to be manually yanked from tablename.CURRVAL.
- Super-duper picky about where bind params can be placed. Not smart about converting Python types for some functions, such as MOD(5, ?).
- LONG (text, binary) values in result sets are read-once. The dialect uses a caching RowProxy when these types are present.
- Connection objects seem like they want to be either close()d or garbage collected, but not both. There's a warning issued but it seems harmless.
4.6 Microsoft SQL Server

4.6.1 Connecting
See the individual driver sections below for details on connecting.
CREATE TABLE test (
    id INTEGER NOT NULL IDENTITY(100,10) PRIMARY KEY,
    name VARCHAR(20) NULL
)

Note that the start and increment values for sequences are optional and will default to 1,1.

Implicit autoincrement behavior works the same in MSSQL as it does in other dialects and results in an IDENTITY column.

- Support for SET IDENTITY_INSERT ON mode (automagic on / off for INSERTs)
- Support for auto-fetching of @@IDENTITY/@@SCOPE_IDENTITY() on INSERT
4.6.5 Nullability
MSSQL has support for three levels of column nullability. The default nullability allows nulls and is explicit in the CREATE TABLE construct:

name VARCHAR(20) NULL

If nullable=None is specified then no specification is made. In other words the database's configured default is used. This will render:

name VARCHAR(20)

If nullable is True or False then the column will be NULL or NOT NULL respectively.
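A short sketch of the three nullability levels side by side; the table and column names are illustrative:

from sqlalchemy import Table, Column, MetaData, String

metadata = MetaData()
t = Table('example', metadata,
    Column('a', String(20)),                 # default: renders NULL
    Column('b', String(20), nullable=None),  # no specification; server default applies
    Column('c', String(20), nullable=False)  # renders NOT NULL
)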
4.6.8 Triggers
SQLAlchemy by default uses OUTPUT INSERTED to get at newly generated primary key values via IDENTITY columns or other server side defaults. MS-SQL does not allow the usage of OUTPUT INSERTED on tables that have triggers. To disable the usage of OUTPUT INSERTED on a per-table basis, specify implicit_returning=False for each Table which has triggers:

Table('mytable', metadata,
    Column('id', Integer, primary_key=True),
    # ...,
    implicit_returning=False
)

Declarative form:

class MyClass(Base):
    # ...
    __table_args__ = {'implicit_returning': False}

This option can also be specified engine-wide using the implicit_returning=False argument on create_engine().
Bytestrings are encoded using the dialect's encoding, which defaults to utf-8. If False, may be overridden by sqlalchemy.engine.base.Dialect.convert_unicode.
collation - Optional, a column-level collation for this string value. Accepts a Windows Collation Name or a SQL Collation Name.

class sqlalchemy.dialects.mssql.DATETIME2(precision=None, **kw)
Bases: sqlalchemy.dialects.mssql.base._DateTimeBase, sqlalchemy.types.DateTime

class sqlalchemy.dialects.mssql.DATETIMEOFFSET(precision=None, **kwargs)
Bases: sqlalchemy.types.TypeEngine

class sqlalchemy.dialects.mssql.IMAGE(length=None)
Bases: sqlalchemy.types.LargeBinary

__init__(length=None)
Construct a LargeBinary type.

Parameters:
length - optional, a length for the column for use in DDL statements, for those BLOB types that accept a length (i.e. MySQL). It does not produce a small BINARY/VARBINARY type - use the BINARY/VARBINARY types specifically for those. May be safely omitted if no CREATE TABLE will be issued. Certain databases may require a length for use in DDL, and will raise an exception when the CREATE TABLE DDL is issued.

class sqlalchemy.dialects.mssql.MONEY(*args, **kwargs)
Bases: sqlalchemy.types.TypeEngine

__init__(*args, **kwargs)
Support implementations that were passing arguments

class sqlalchemy.dialects.mssql.NCHAR(length=None, collation=None, **kw)
Bases: sqlalchemy.dialects.mssql.base._StringType, sqlalchemy.types.NCHAR

MSSQL NCHAR type, for fixed-length unicode character data up to 4,000 characters.

__init__(length=None, collation=None, **kw)
Construct an NCHAR.

Parameters:
length - Optional, maximum data length, in characters.
collation - Optional, a column-level collation for this string value. Accepts a Windows Collation Name or a SQL Collation Name.

class sqlalchemy.dialects.mssql.NTEXT(length=None, collation=None, **kw)
Bases: sqlalchemy.dialects.mssql.base._StringType, sqlalchemy.types.UnicodeText

MSSQL NTEXT type, for variable-length unicode text up to 2^30 characters.

__init__(length=None, collation=None, **kw)
Construct an NTEXT.

Parameters:
collation - Optional, a column-level collation for this string value. Accepts a Windows Collation Name or a SQL Collation Name.

class sqlalchemy.dialects.mssql.NVARCHAR(length=None, collation=None, **kw)
Bases: sqlalchemy.dialects.mssql.base._StringType, sqlalchemy.types.NVARCHAR

MSSQL NVARCHAR type.
For variable-length unicode character data up to 4,000 characters.

__init__(length=None, collation=None, **kw)
Construct an NVARCHAR.

Parameters:
length - Optional, maximum data length, in characters.
collation - Optional, a column-level collation for this string value. Accepts a Windows Collation Name or a SQL Collation Name.

class sqlalchemy.dialects.mssql.REAL(**kw)
Bases: sqlalchemy.types.REAL

class sqlalchemy.dialects.mssql.SMALLDATETIME(timezone=False)
Bases: sqlalchemy.dialects.mssql.base._DateTimeBase, sqlalchemy.types.DateTime

class sqlalchemy.dialects.mssql.SMALLMONEY(*args, **kwargs)
Bases: sqlalchemy.types.TypeEngine

__init__(*args, **kwargs)
Support implementations that were passing arguments

class sqlalchemy.dialects.mssql.SQL_VARIANT(*args, **kwargs)
Bases: sqlalchemy.types.TypeEngine

__init__(*args, **kwargs)
Support implementations that were passing arguments

class sqlalchemy.dialects.mssql.TEXT(length=None, collation=None, **kw)
Bases: sqlalchemy.dialects.mssql.base._StringType, sqlalchemy.types.TEXT

MSSQL TEXT type, for variable-length text up to 2^31 characters.

__init__(length=None, collation=None, **kw)
Construct a TEXT.

Parameters:
collation - Optional, a column-level collation for this string value. Accepts a Windows Collation Name or a SQL Collation Name.

class sqlalchemy.dialects.mssql.TIME(precision=None, **kwargs)
Bases: sqlalchemy.types.TIME

class sqlalchemy.dialects.mssql.TINYINT(*args, **kwargs)
Bases: sqlalchemy.types.Integer

__init__(*args, **kwargs)
Support implementations that were passing arguments

class sqlalchemy.dialects.mssql.UNIQUEIDENTIFIER(*args, **kwargs)
Bases: sqlalchemy.types.TypeEngine

__init__(*args, **kwargs)
Support implementations that were passing arguments

class sqlalchemy.dialects.mssql.VARCHAR(length=None, collation=None, **kw)
Bases: sqlalchemy.dialects.mssql.base._StringType, sqlalchemy.types.VARCHAR

MSSQL VARCHAR type, for variable-length non-Unicode data with a maximum of 8,000 characters.

__init__(length=None, collation=None, **kw)
Construct a VARCHAR.

Parameters:
length - Optional, maximum data length, in characters.
convert_unicode - defaults to False. If True, convert unicode data sent to the database to a str bytestring, and convert bytestrings coming back from the database into unicode. Bytestrings are encoded using the dialect's encoding, which defaults to utf-8. If False, may be overridden by sqlalchemy.engine.base.Dialect.convert_unicode.
collation - Optional, a column-level collation for this string value. Accepts a Windows Collation Name or a SQL Collation Name.
4.6.13 PyODBC
Support for MS-SQL via pyodbc.

pyodbc is available at: http://pypi.python.org/pypi/pyodbc/

Connecting

Examples of pyodbc connection string URLs:

mssql+pyodbc://mydsn - connects using the specified DSN named mydsn. The connection string that is created will appear like: dsn=mydsn;Trusted_Connection=Yes
mssql+pyodbc://user:pass@mydsn - connects using the DSN named mydsn passing in the UID and PWD information. The connection string that is created will appear like: dsn=mydsn;UID=user;PWD=pass
mssql+pyodbc://user:pass@mydsn/?LANGUAGE=us_english - connects using the DSN named mydsn passing in the UID and PWD information, plus the additional connection configuration option LANGUAGE. The connection string that is created will appear like: dsn=mydsn;UID=user;PWD=pass;LANGUAGE=us_english
mssql+pyodbc://user:pass@host/db - connects using a dynamically created connection string that would appear like: DRIVER={SQL Server};Server=host;Database=db;UID=user;PWD=pass
mssql+pyodbc://user:pass@host:123/db - connects using a connection string that is dynamically created, which also includes the port information using the comma syntax. If your connection string requires the port information to be passed as a port keyword, see the next example. This will create the following connection string: DRIVER={SQL Server};Server=host,123;Database=db;UID=user;PWD=pass
mssql+pyodbc://user:pass@host/db?port=123 - connects using a dynamically created connection string that includes the port information as a separate port keyword. This will create the following connection string: DRIVER={SQL Server};Server=host;Database=db;UID=user;PWD=pass;port=123

If you require a connection string that is outside the options presented above, use the odbc_connect keyword to pass in a urlencoded connection string. What gets passed in will be urldecoded and passed directly.
For example:

mssql+pyodbc:///?odbc_connect=dsn%3Dmydsn%3BDatabase%3Ddb

would create the following connection string:

dsn=mydsn;Database=db

Encoding your connection string can be easily accomplished through the Python shell. For example:

>>> import urllib
>>> urllib.quote_plus('dsn=mydsn;Database=db')
'dsn%3Dmydsn%3BDatabase%3Ddb'
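Putting the two steps together, the urlencoded string can be built programmatically and handed to create_engine(); the driver name and credentials below are placeholders:

import urllib
from sqlalchemy import create_engine

raw = "DRIVER={SQL Server};Server=host;Database=db;UID=user;PWD=pass"
engine = create_engine("mssql+pyodbc:///?odbc_connect=%s" % urllib.quote_plus(raw))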
4.6.14 mxODBC
Support for MS-SQL via mxODBC.

mxODBC is available at: http://www.egenix.com/

This was tested with mxODBC 3.1.2 and the SQL Server Native Client connected to MSSQL 2005 and 2008 Express Editions.

Connecting

Connection is via DSN:

mssql+mxodbc://<username>:<password>@<dsnname>

Execution Modes

mxODBC features two styles of statement execution, using the cursor.execute() and cursor.executedirect() methods (the second being an extension to the DBAPI specification). The former makes use of a particular API call specific to the SQL Server Native Client ODBC driver known as SQLDescribeParam, while the latter does not.

mxODBC apparently only makes repeated use of a single prepared statement when SQLDescribeParam is used. The advantage to prepared statement reuse is one of performance. The disadvantage is that SQLDescribeParam has a limited set of scenarios in which bind parameters are understood, including that they cannot be placed within the argument lists of function calls, anywhere outside the FROM, or even within subqueries within the FROM clause - making the usage of bind parameters within SELECT statements impossible for all but the most simplistic statements.

For this reason, the mxODBC dialect uses the "native" mode by default only for INSERT, UPDATE, and DELETE statements, and uses the escaped string mode for all other statements. This behavior can be controlled via execution_options() using the native_odbc_execute flag with a value of True or False, where a value of True will unconditionally use native bind parameters and a value of False will unconditionally use string-escaped parameters.
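As a sketch, the flag can be supplied per-connection through execution_options(); the engine and statement here are assumed to exist already:

conn = engine.connect().execution_options(native_odbc_execute=True)
conn.execute(stmt)  # executed with native bind parameters under this option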
4.6.15 pymssql
Support for the pymssql dialect. This dialect supports pymssql 1.0 and greater. pymssql is available at:
http://pymssql.sourceforge.net/

Connecting

Sample connect string:

mssql+pymssql://<username>:<password>@<freetds_name>

Adding ?charset=utf8 or similar will cause pymssql to return strings as Python unicode objects. This can potentially improve performance in some scenarios as decoding of strings is handled natively.

Limitations

pymssql inherits a lot of limitations from FreeTDS, including:

- no support for multibyte schema identifiers
- poor support for large decimals
- poor support for binary fields
- poor support for VARCHAR/CHAR fields over 255 characters

Please consult the pymssql documentation for further information.
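For example, a sketch of enabling the charset option on the URL; the host name and credentials are placeholders:

from sqlalchemy import create_engine

engine = create_engine('mssql+pymssql://user:pass@freetds_name/?charset=utf8')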
4.6.17 AdoDBAPI
The adodbapi dialect is not implemented for 0.6 at this time.
4.7 MySQL
Support for the MySQL database.
See the official MySQL documentation for detailed information about features supported in any given server release.
4.7.2 Connecting
See the API documentation on individual drivers for details on connecting.
of tables in foreign key declarations are always received from the database as all-lower case, making it impossible to accurately reflect a schema where inter-related tables use mixed-case identifier names. Therefore it is strongly advised that table names be declared as all lower case both within SQLAlchemy as well as on the MySQL database itself, especially if database reflection features are to be used.
4.7.6 Keys
Not all MySQL storage engines support foreign keys. For MyISAM and similar engines, the information loaded by table reflection will not include foreign keys. For these tables, you may supply a ForeignKeyConstraint at reflection time:

Table('mytable', metadata,
    ForeignKeyConstraint(['other_id'], ['othertable.other_id']),
    autoload=True
)

When creating tables, SQLAlchemy will automatically set AUTO_INCREMENT on an integer primary key column:

>>> t = Table('mytable', metadata,
...     Column('id', Integer, primary_key=True)
... )
>>> t.create()
CREATE TABLE mytable (
    id INTEGER NOT NULL AUTO_INCREMENT,
    PRIMARY KEY (id)
)

You can disable this behavior by supplying autoincrement=False to the Column. This flag can also be used to enable auto-increment on a secondary column in a multi-column key for some storage engines:

Table('mytable', metadata,
    Column('gid', Integer, primary_key=True, autoincrement=False),
    Column('id', Integer, primary_key=True)
)
Prefix lengths are given in characters for nonbinary string types and in bytes for binary string types. The value passed to the keyword argument will be simply passed through to the underlying CREATE INDEX command, so it must be an integer. MySQL only allows a length for an index if it is for a CHAR, VARCHAR, TEXT, BINARY, VARBINARY or BLOB column. More information can be found at: http://dev.mysql.com/doc/refman/5.0/en/create-index.html
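A minimal sketch of passing such a prefix length; the mysql_length keyword name used here is an assumption based on the MySQL dialect's index options, and the table is hypothetical:

from sqlalchemy import Table, Column, MetaData, Index, Integer, String

metadata = MetaData()
docs = Table('docs', metadata,
    Column('id', Integer, primary_key=True),
    Column('body', String(2000))
)

# index only the first 10 characters of "body"
Index('ix_docs_body', docs.c.body, mysql_length=10)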
__init__(length=None)
Construct a LargeBinary type.

Parameters:
length - optional, a length for the column for use in DDL statements, for those BLOB types that accept a length (i.e. MySQL). It does not produce a small BINARY/VARBINARY type - use the BINARY/VARBINARY types specifically for those. May be safely omitted if no CREATE TABLE will be issued. Certain databases may require a length for use in DDL, and will raise an exception when the CREATE TABLE DDL is issued.

class sqlalchemy.dialects.mysql.BOOLEAN(create_constraint=True, name=None)
Bases: sqlalchemy.types.Boolean

The SQL BOOLEAN type.

__init__(create_constraint=True, name=None)
Construct a Boolean.

Parameters:
create_constraint - defaults to True. If the boolean is generated as an int/smallint, also create a CHECK constraint on the table that ensures 1 or 0 as a value.
name - if a CHECK constraint is generated, specify the name of the constraint.

class sqlalchemy.dialects.mysql.CHAR(length=None, **kwargs)
Bases: sqlalchemy.dialects.mysql.base._StringType, sqlalchemy.types.CHAR

MySQL CHAR type, for fixed-length character data.

__init__(length=None, **kwargs)
Construct a CHAR.

Parameters:
length - Maximum data length, in characters.
binary - Optional, use the default binary collation for the national character set. This does not affect the type of data stored, use a BINARY type for binary data.
collation - Optional, request a particular collation. Must be compatible with the national character set.

class sqlalchemy.dialects.mysql.DATE(*args, **kwargs)
Bases: sqlalchemy.types.Date

The SQL DATE type.

__init__(*args, **kwargs)
Support implementations that were passing arguments

class sqlalchemy.dialects.mysql.DATETIME(timezone=False)
Bases: sqlalchemy.types.DateTime

The SQL DATETIME type.

class sqlalchemy.dialects.mysql.DECIMAL(precision=None, scale=None, asdecimal=True, **kw)
Bases: sqlalchemy.dialects.mysql.base._NumericType, sqlalchemy.types.DECIMAL

MySQL DECIMAL type.

__init__(precision=None, scale=None, asdecimal=True, **kw)
Construct a DECIMAL.

Parameters:
precision - Total digits in this number. If scale and precision are both None, values are stored to the limits allowed by the server.
scale - The number of digits after the decimal point.
unsigned - a boolean, optional.
zerofill - Optional. If true, values will be stored as strings left-padded with zeros. Note that this does not affect the values returned by the underlying database API, which continue to be numeric.

class sqlalchemy.dialects.mysql.DOUBLE(precision=None, scale=None, asdecimal=True, **kw)
Bases: sqlalchemy.dialects.mysql.base._FloatType

MySQL DOUBLE type.

__init__(precision=None, scale=None, asdecimal=True, **kw)
Construct a DOUBLE.

Parameters:
precision - Total digits in this number. If scale and precision are both None, values are stored to the limits allowed by the server.
scale - The number of digits after the decimal point.
unsigned - a boolean, optional.
zerofill - Optional. If true, values will be stored as strings left-padded with zeros. Note that this does not affect the values returned by the underlying database API, which continue to be numeric.

class sqlalchemy.dialects.mysql.ENUM(*enums, **kw)
Bases: sqlalchemy.types.Enum, sqlalchemy.dialects.mysql.base._StringType

MySQL ENUM type.

__init__(*enums, **kw)
Construct an ENUM.

Example:

Column('myenum', MSEnum('foo', 'bar', 'baz'))

Parameters:
enums - The range of valid values for this ENUM. Values will be quoted when generating the schema according to the quoting flag (see below).
strict - Defaults to False: ensure that a given value is in this ENUM's range of permissible values when inserting or updating rows. Note that MySQL will not raise a fatal error if you attempt to store an out of range value - an alternate value will be stored instead. (See the MySQL ENUM documentation.)
charset - Optional, a column-level character set for this string value. Takes precedence over the ascii or unicode short-hands.
collation - Optional, a column-level collation for this string value. Takes precedence over the binary short-hand.
ascii - Defaults to False: short-hand for the latin1 character set, generates ASCII in schema.
unicode - Defaults to False: short-hand for the ucs2 character set, generates UNICODE in schema.
binary - Defaults to False: short-hand, pick the binary collation type that matches the column's character set. Generates BINARY in schema. This does not affect the type of data stored, only the collation of character data.
quoting - Defaults to 'auto': automatically determine enum value quoting. If all enum values are surrounded by the same quoting character, then use 'quoted' mode. Otherwise, use 'unquoted' mode.
'quoted': values in enums are already quoted; they will be used directly when generating the schema - this usage is deprecated.
'unquoted': values in enums are not quoted; they will be escaped and surrounded by single quotes when generating the schema.
Previous versions of this type always required manually quoted values to be supplied; future versions will always quote the string literals for you. This is a transitional option.

class sqlalchemy.dialects.mysql.FLOAT(precision=None, scale=None, asdecimal=False, **kw)
Bases: sqlalchemy.dialects.mysql.base._FloatType, sqlalchemy.types.FLOAT

MySQL FLOAT type.

__init__(precision=None, scale=None, asdecimal=False, **kw)
Construct a FLOAT.

Parameters:
precision - Total digits in this number. If scale and precision are both None, values are stored to the limits allowed by the server.
scale - The number of digits after the decimal point.
unsigned - a boolean, optional.
zerofill - Optional. If true, values will be stored as strings left-padded with zeros. Note that this does not affect the values returned by the underlying database API, which continue to be numeric.

class sqlalchemy.dialects.mysql.INTEGER(display_width=None, **kw)
Bases: sqlalchemy.dialects.mysql.base._IntegerType, sqlalchemy.types.INTEGER

MySQL INTEGER type.

__init__(display_width=None, **kw)
Construct an INTEGER.

Parameters:
display_width - Optional, maximum display width for this number.
unsigned - a boolean, optional.
zerofill - Optional. If true, values will be stored as strings left-padded with zeros. Note that this does not affect the values returned by the underlying database API, which continue to be numeric.

class sqlalchemy.dialects.mysql.LONGBLOB(length=None)
Bases: sqlalchemy.types._Binary

MySQL LONGBLOB type, for binary data up to 2^32 bytes.

class sqlalchemy.dialects.mysql.LONGTEXT(**kwargs)
Bases: sqlalchemy.dialects.mysql.base._StringType
MySQL LONGTEXT type, for text up to 2^32 characters.

__init__(**kwargs)
Construct a LONGTEXT.

Parameters:
charset - Optional, a column-level character set for this string value. Takes precedence over the ascii or unicode short-hands.
collation - Optional, a column-level collation for this string value. Takes precedence over the binary short-hand.
ascii - Defaults to False: short-hand for the latin1 character set, generates ASCII in schema.
unicode - Defaults to False: short-hand for the ucs2 character set, generates UNICODE in schema.
national - Optional. If true, use the server's configured national character set.
binary - Defaults to False: short-hand, pick the binary collation type that matches the column's character set. Generates BINARY in schema. This does not affect the type of data stored, only the collation of character data.

class sqlalchemy.dialects.mysql.MEDIUMBLOB(length=None)
Bases: sqlalchemy.types._Binary

MySQL MEDIUMBLOB type, for binary data up to 2^24 bytes.

class sqlalchemy.dialects.mysql.MEDIUMINT(display_width=None, **kw)
Bases: sqlalchemy.dialects.mysql.base._IntegerType

MySQL MEDIUMINTEGER type.

__init__(display_width=None, **kw)
Construct a MEDIUMINTEGER.

Parameters:
display_width - Optional, maximum display width for this number.
unsigned - a boolean, optional.
zerofill - Optional. If true, values will be stored as strings left-padded with zeros. Note that this does not affect the values returned by the underlying database API, which continue to be numeric.

class sqlalchemy.dialects.mysql.MEDIUMTEXT(**kwargs)
Bases: sqlalchemy.dialects.mysql.base._StringType

MySQL MEDIUMTEXT type, for text up to 2^24 characters.

__init__(**kwargs)
Construct a MEDIUMTEXT.

Parameters:
charset - Optional, a column-level character set for this string value. Takes precedence over the ascii or unicode short-hands.
collation - Optional, a column-level collation for this string value. Takes precedence over the binary short-hand.
ascii - Defaults to False: short-hand for the latin1 character set, generates ASCII in schema.
unicode - Defaults to False: short-hand for the ucs2 character set, generates UNICODE in schema.
national - Optional. If true, use the server's configured national character set.
binary - Defaults to False: short-hand, pick the binary collation type that matches the column's character set. Generates BINARY in schema. This does not affect the type of data stored, only the collation of character data.

class sqlalchemy.dialects.mysql.NCHAR(length=None, **kwargs)
Bases: sqlalchemy.dialects.mysql.base._StringType, sqlalchemy.types.NCHAR

MySQL NCHAR type, for fixed-length character data in the server's configured national character set.

__init__(length=None, **kwargs)
Construct an NCHAR.

Parameters:
length - Maximum data length, in characters.
binary - Optional, use the default binary collation for the national character set. This does not affect the type of data stored, use a BINARY type for binary data.
collation - Optional, request a particular collation. Must be compatible with the national character set.

class sqlalchemy.dialects.mysql.NUMERIC(precision=None, scale=None, asdecimal=True, **kw)
Bases: sqlalchemy.dialects.mysql.base._NumericType, sqlalchemy.types.NUMERIC

MySQL NUMERIC type.

__init__(precision=None, scale=None, asdecimal=True, **kw)
Construct a NUMERIC.

Parameters:
precision - Total digits in this number. If scale and precision are both None, values are stored to the limits allowed by the server.
scale - The number of digits after the decimal point.
unsigned - a boolean, optional.
zerofill - Optional. If true, values will be stored as strings left-padded with zeros. Note that this does not affect the values returned by the underlying database API, which continue to be numeric.

class sqlalchemy.dialects.mysql.NVARCHAR(length=None, **kwargs)
Bases: sqlalchemy.dialects.mysql.base._StringType, sqlalchemy.types.NVARCHAR

MySQL NVARCHAR type, for variable-length character data in the server's configured national character set.

__init__(length=None, **kwargs)
Construct an NVARCHAR.

Parameters:
length - Maximum data length, in characters.
binary - Optional, use the default binary collation for the national character set. This does not affect the type of data stored, use a BINARY type for binary data.
collation - Optional, request a particular collation. Must be compatible with the national character set.

class sqlalchemy.dialects.mysql.REAL(precision=None, scale=None, asdecimal=True, **kw)
Bases: sqlalchemy.dialects.mysql.base._FloatType, sqlalchemy.types.REAL

MySQL REAL type.

__init__(precision=None, scale=None, asdecimal=True, **kw)
Construct a REAL.

Parameters:
precision - Total digits in this number. If scale and precision are both None, values are stored to the limits allowed by the server.
scale - The number of digits after the decimal point.
unsigned - a boolean, optional.
zerofill - Optional. If true, values will be stored as strings left-padded with zeros. Note that this does not affect the values returned by the underlying database API, which continue to be numeric.

class sqlalchemy.dialects.mysql.SET(*values, **kw)
Bases: sqlalchemy.dialects.mysql.base._StringType

MySQL SET type.

__init__(*values, **kw)
Construct a SET.

Example:

Column('myset', MSSet("'foo'", "'bar'", "'baz'"))

Parameters:
values - The range of valid values for this SET. Values will be used exactly as they appear when generating schemas. Strings must be quoted, as in the example above. Single-quotes are suggested for ANSI compatibility and are required for portability to servers with ANSI_QUOTES enabled.
charset - Optional, a column-level character set for this string value. Takes precedence over the ascii or unicode short-hands.
collation - Optional, a column-level collation for this string value. Takes precedence over the binary short-hand.
ascii - Defaults to False: short-hand for the latin1 character set, generates ASCII in schema.
unicode - Defaults to False: short-hand for the ucs2 character set, generates UNICODE in schema.
binary - Defaults to False: short-hand, pick the binary collation type that matches the column's character set. Generates BINARY in schema. This does not affect the type of data stored, only the collation of character data.

class sqlalchemy.dialects.mysql.SMALLINT(display_width=None, **kw)
Bases: sqlalchemy.dialects.mysql.base._IntegerType, sqlalchemy.types.SMALLINT

MySQL SMALLINTEGER type.
__init__(display_width=None, **kw)
Construct a SMALLINTEGER.

Parameters:
display_width - Optional, maximum display width for this number.
unsigned - a boolean, optional.
zerofill - Optional. If true, values will be stored as strings left-padded with zeros. Note that this does not affect the values returned by the underlying database API, which continue to be numeric.

class sqlalchemy.dialects.mysql.TEXT(length=None, **kw)
Bases: sqlalchemy.dialects.mysql.base._StringType, sqlalchemy.types.TEXT

MySQL TEXT type, for text up to 2^16 characters.

__init__(length=None, **kw)
Construct a TEXT.

Parameters:
length - Optional, if provided the server may optimize storage by substituting the smallest TEXT type sufficient to store length characters.
charset - Optional, a column-level character set for this string value. Takes precedence over the ascii or unicode short-hands.
collation - Optional, a column-level collation for this string value. Takes precedence over the binary short-hand.
ascii - Defaults to False: short-hand for the latin1 character set, generates ASCII in schema.
unicode - Defaults to False: short-hand for the ucs2 character set, generates UNICODE in schema.
national - Optional. If true, use the server's configured national character set.
binary - Defaults to False: short-hand, pick the binary collation type that matches the column's character set. Generates BINARY in schema. This does not affect the type of data stored, only the collation of character data.

class sqlalchemy.dialects.mysql.TIME(timezone=False)
Bases: sqlalchemy.types.Time

The SQL TIME type.

class sqlalchemy.dialects.mysql.TIMESTAMP(timezone=False)
Bases: sqlalchemy.types.TIMESTAMP

MySQL TIMESTAMP type.

class sqlalchemy.dialects.mysql.TINYBLOB(length=None)
Bases: sqlalchemy.types._Binary

MySQL TINYBLOB type, for binary data up to 2^8 bytes.

class sqlalchemy.dialects.mysql.TINYINT(display_width=None, **kw)
Bases: sqlalchemy.dialects.mysql.base._IntegerType

MySQL TINYINT type.
__init__(display_width=None, **kw)
Construct a TINYINT.

Note: following the usual MySQL conventions, TINYINT(1) columns reflected during Table(..., autoload=True) are treated as Boolean columns.

Parameters:
display_width - Optional, maximum display width for this number.
unsigned - a boolean, optional.
zerofill - Optional. If true, values will be stored as strings left-padded with zeros. Note that this does not affect the values returned by the underlying database API, which continue to be numeric.

class sqlalchemy.dialects.mysql.TINYTEXT(**kwargs)
Bases: sqlalchemy.dialects.mysql.base._StringType

MySQL TINYTEXT type, for text up to 2^8 characters.

__init__(**kwargs)
Construct a TINYTEXT.

Parameters:
charset - Optional, a column-level character set for this string value. Takes precedence over the ascii or unicode short-hands.
collation - Optional, a column-level collation for this string value. Takes precedence over the binary short-hand.
ascii - Defaults to False: short-hand for the latin1 character set, generates ASCII in schema.
unicode - Defaults to False: short-hand for the ucs2 character set, generates UNICODE in schema.
national - Optional. If true, use the server's configured national character set.
binary - Defaults to False: short-hand, pick the binary collation type that matches the column's character set. Generates BINARY in schema. This does not affect the type of data stored, only the collation of character data.

class sqlalchemy.dialects.mysql.VARBINARY(length=None)
Bases: sqlalchemy.types._Binary

The SQL VARBINARY type.

class sqlalchemy.dialects.mysql.VARCHAR(length=None, **kwargs)
Bases: sqlalchemy.dialects.mysql.base._StringType, sqlalchemy.types.VARCHAR

MySQL VARCHAR type, for variable-length character data.

__init__(length=None, **kwargs)
Construct a VARCHAR.

Parameters:
charset - Optional, a column-level character set for this string value. Takes precedence over the ascii or unicode short-hands.
collation - Optional, a column-level collation for this string value. Takes precedence over the binary short-hand.
ascii - Defaults to False: short-hand for the latin1 character set, generates ASCII in schema.
unicode - Defaults to False: short-hand for the ucs2 character set, generates UNICODE in schema.
national - Optional. If true, use the server's configured national character set.
binary - Defaults to False: short-hand, pick the binary collation type that matches the column's character set. Generates BINARY in schema. This does not affect the type of data stored, only the collation of character data.

class sqlalchemy.dialects.mysql.YEAR(display_width=None)
Bases: sqlalchemy.types.TypeEngine

MySQL YEAR type, for single byte storage of years 1901-2155.
Character Sets

SQLAlchemy zxjdbc dialects pass unicode straight through to the zxjdbc/JDBC layer. To allow multiple character sets to be sent from the MySQL Connector/J JDBC driver, by default SQLAlchemy sets its characterEncoding connection property to UTF-8. It may be overridden via a create_engine URL parameter.
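A sketch of overriding the property via a URL parameter; whether a given encoding name is accepted is up to Connector/J, and the assumption here is that query string arguments are passed through to the JDBC driver:

from sqlalchemy import create_engine

# credentials and encoding name are placeholders
engine = create_engine('mysql+zxjdbc://user:pass@host/dbname?characterEncoding=ISO-8859-1')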
4.8 Oracle
Support for the Oracle database. Oracle versions 8 through current (11g at the time of this writing) are supported. For information on connecting via specific drivers, see the documentation for that driver.
unless identifier names have been truly created as case sensitive (i.e. using quoted names), all lowercase names should be used on the SQLAlchemy side.
4.8.4 Unicode
SQLAlchemy 0.6 uses the "native unicode" mode provided as of cx_oracle 5. cx_oracle 5.0.2 or greater is recommended for support of NCLOB. If not using cx_oracle 5, the NLS_LANG environment variable needs to be set in order for the oracle client library to use proper encoding, such as AMERICAN_AMERICA.UTF8.

Also note that Oracle supports unicode data through the NVARCHAR and NCLOB data types. When using the SQLAlchemy Unicode and UnicodeText types, these DDL types will be used within CREATE TABLE statements. Usage of VARCHAR2 and CLOB with unicode text still requires NLS_LANG to be set.
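To illustrate, a sketch of a table whose unicode columns render as the N-variant DDL types on Oracle; the names are hypothetical:

from sqlalchemy import Table, Column, MetaData, Integer, Unicode, UnicodeText

metadata = MetaData()
docs = Table('docs', metadata,
    Column('id', Integer, primary_key=True),
    Column('title', Unicode(200)),  # renders NVARCHAR2(200) on Oracle
    Column('body', UnicodeText)     # renders NCLOB on Oracle
)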
When using the SQLAlchemy ORM, the ORM has limited ability to manually issue cascading updates - specify ForeignKey objects using the deferrable=True, initially='deferred' keyword arguments, and specify passive_updates=False on each relationship(), as in the sketch below.
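A sketch of that configuration using hypothetical parent/child tables and classical mappings:

from sqlalchemy import Table, Column, MetaData, Integer, ForeignKey
from sqlalchemy.orm import mapper, relationship

metadata = MetaData()
parent = Table('parent', metadata,
    Column('id', Integer, primary_key=True))
child = Table('child', metadata,
    Column('id', Integer, primary_key=True),
    Column('parent_id', Integer,
           ForeignKey('parent.id', deferrable=True, initially='deferred')))

class Parent(object):
    pass

class Child(object):
    pass

mapper(Parent, parent, properties={
    # the ORM itself emits UPDATE statements for the foreign key values
    'children': relationship(Child, passive_updates=False)
})
mapper(Child, child)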
the native unicode mode is disabled when using cx_oracle, i.e. SQLAlchemy encodes all Python unicode objects to string before passing in as bind parameters.
class sqlalchemy.dialects.oracle.NCLOB(length=None, convert_unicode=False, assert_unicode=None, unicode_error=None, _warn_on_bytestring=False)
Bases: sqlalchemy.types.Text

__init__(length=None, convert_unicode=False, _warn_on_bytestring=False)
Create a string-holding type.

Parameters:
length - optional, a length for the column for use in DDL statements. May be safely omitted if no CREATE TABLE will be issued. Certain databases may require a length for use in DDL, and will raise an exception when the CREATE TABLE DDL is issued. Whether the value is interpreted as bytes or characters is database specific.
convert_unicode - defaults to False. If True, the type will do what is necessary in order to accept Python Unicode objects as bind parameters, and to return Python Unicode objects in result rows. This may require SQLAlchemy to explicitly coerce incoming Python unicodes into an encoding, and from an encoding back to Unicode, or it may not require any interaction from SQLAlchemy at all, depending on the DBAPI in use. When SQLAlchemy performs the encoding/decoding, the encoding used is configured via encoding, which defaults to utf-8. The convert_unicode behavior can also be turned on for all String types by setting sqlalchemy.engine.base.Dialect.convert_unicode on create_engine(). To instruct SQLAlchemy to perform Unicode encoding/decoding even on a platform that already handles Unicode natively, set convert_unicode='force'. This will incur significant performance overhead when fetching unicode result columns.
assert_unicode - Deprecated. A warning is raised in all cases when a non-Unicode object is passed when SQLAlchemy would coerce into an encoding (note: but not when the DBAPI handles unicode objects natively). To suppress or raise this warning to an error, use the Python warnings filter documented at: http://docs.python.org/library/warnings.html
unicode_error - Optional, a method to use to handle Unicode conversion errors. Behaves like the errors keyword argument to the standard library's string.decode() functions. This flag requires that convert_unicode is set to 'force' - otherwise, SQLAlchemy is not guaranteed to handle the task of unicode conversion. Note that this flag adds significant performance overhead to row-fetching operations for backends that already return unicode objects natively (which most DBAPIs do). This flag should only be used as an absolute last resort for reading strings from a column with varied or corrupted encodings, which only applies to databases that accept invalid encodings in the first place (i.e. MySQL, not PG, Sqlite, etc.)

class sqlalchemy.dialects.oracle.NUMBER(precision=None, scale=None, asdecimal=None)
Bases: sqlalchemy.types.Numeric, sqlalchemy.types.Integer

class sqlalchemy.dialects.oracle.LONG(length=None, convert_unicode=False, assert_unicode=None, unicode_error=None, _warn_on_bytestring=False)
Bases: sqlalchemy.types.Text
Parameters:
length - optional, a length for the column for use in DDL statements. May be safely omitted if no CREATE TABLE will be issued. Certain databases may require a length for use in DDL, and will raise an exception when the CREATE TABLE DDL is issued. Whether the value is interpreted as bytes or characters is database specific.
convert_unicode - defaults to False. If True, the type will do what is necessary in order to accept Python Unicode objects as bind parameters, and to return Python Unicode objects in result rows. This may require SQLAlchemy to explicitly coerce incoming Python unicodes into an encoding, and from an encoding back to Unicode, or it may not require any interaction from SQLAlchemy at all, depending on the DBAPI in use. When SQLAlchemy performs the encoding/decoding, the encoding used is configured via encoding, which defaults to utf-8. The convert_unicode behavior can also be turned on for all String types by setting sqlalchemy.engine.base.Dialect.convert_unicode on create_engine(). To instruct SQLAlchemy to perform Unicode encoding/decoding even on a platform that already handles Unicode natively, set convert_unicode='force'. This will incur significant performance overhead when fetching unicode result columns.
assert_unicode - Deprecated. A warning is raised in all cases when a non-Unicode object is passed when SQLAlchemy would coerce into an encoding (note: but not when the DBAPI handles unicode objects natively). To suppress or raise this warning to an error, use the Python warnings filter documented at: http://docs.python.org/library/warnings.html
unicode_error - Optional, a method to use to handle Unicode conversion errors. Behaves like the errors keyword argument to the standard library's string.decode() functions. This flag requires that convert_unicode is set to 'force' - otherwise, SQLAlchemy is not guaranteed to handle the task of unicode conversion. Note that this flag adds significant performance overhead to row-fetching operations for backends that already return unicode objects natively (which most DBAPIs do). This flag should only be used as an absolute last resort for reading strings from a column with varied or corrupted encodings, which only applies to databases that accept invalid encodings in the first place (i.e. MySQL, not PG, Sqlite, etc.)

class sqlalchemy.dialects.oracle.RAW(length=None)
Bases: sqlalchemy.types._Binary
Driver

The Oracle dialect uses the cx_oracle driver, available at http://cx-oracle.sourceforge.net/ . The dialect has several behaviors which are specifically tailored towards compatibility with this module. Version 5.0 or greater is strongly recommended, as SQLAlchemy makes extensive use of the cx_oracle output converters for numeric and string conversions.

Connecting
Connecting with create_engine() uses the standard URL approach of oracle://user:pass@host:port/dbname[?key=value&key=value...]. If dbname is present, the host, port, and dbname tokens are converted to a TNS name using the cx_oracle makedsn() function. Otherwise, the host token is taken directly as a TNS name.

Additional arguments which may be specified either as query string arguments on the URL, or as keyword arguments to create_engine() (see the sketch at the end of this section) are:

allow_twophase - enable two-phase transactions. Defaults to True.
arraysize - set the cx_oracle.arraysize value on cursors; in SQLAlchemy it defaults to 50. See the section on LOB Objects below.
auto_convert_lobs - defaults to True, see the section on LOB objects.
auto_setinputsizes - the cx_oracle.setinputsizes() call is issued for all bind parameters. This is required for LOB datatypes but can be disabled to reduce overhead. Defaults to True.
mode - This is given the string value of SYSDBA or SYSOPER, or alternatively an integer value. This value is only available as a URL query string argument.
threaded - enable multithreaded access to cx_oracle connections. Defaults to True. Note that this is the opposite default of cx_oracle itself.

Unicode

cx_oracle 5 fully supports Python unicode objects. SQLAlchemy will pass all unicode strings directly to cx_oracle, and additionally uses an output handler so that all string based result values are returned as unicode as well. Note that this behavior is disabled when Oracle 8 is detected, as it has been observed that issues remain when passing Python unicodes to cx_oracle with Oracle 8.

LOB Objects

cx_oracle returns oracle LOBs using the cx_oracle.LOB object. SQLAlchemy converts these to strings so that the interface of the Binary type is consistent with that of other backends, and so that the linkage to a live cursor is not needed in scenarios like result.fetchmany() and result.fetchall(). This means that by default, LOB objects are fully fetched unconditionally by SQLAlchemy, and the linkage to a live cursor is broken. To disable this processing, pass auto_convert_lobs=False to create_engine().

Two Phase Transaction Support

Two Phase transactions are implemented using XA transactions. Success has been reported with this feature but it should be regarded as experimental.
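Returning to the connection arguments listed above, a sketch of passing several of them as keyword arguments; the URL and values are placeholders:

from sqlalchemy import create_engine

engine = create_engine(
    'oracle+cx_oracle://user:pass@host:1521/dbname',
    arraysize=500,            # raise the cursor prefetch size from the default of 50
    auto_convert_lobs=False,  # leave LOB values as cx_oracle.LOB objects
    allow_twophase=False      # opt out of two-phase transaction support
)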
Precision Numerics

The SQLAlchemy dialect goes through a lot of steps to ensure that decimal numbers are sent and received with full accuracy. An "outputtypehandler" callable is associated with each cx_oracle connection object which detects numeric types and receives them as string values, instead of receiving a Python float directly, which is then passed to the Python Decimal constructor. The Numeric and Float types under the cx_oracle dialect are aware of this behavior, and will coerce the Decimal to float if the asdecimal flag is False (default on Float, optional on Numeric).

The handler attempts to use the "precision" and "scale" attributes of the result set column to best determine if subsequent incoming values should be received as Decimal as opposed to int (in which case no processing is added). There are several scenarios where OCI does not provide unambiguous data as to the numeric type, including some situations where individual rows may return a combination of floating point and integer values. Certain values for "precision" and "scale" have been observed to determine this scenario. When it occurs, the outputtypehandler receives as string and then passes off to a processing function which detects, for each returned value, if a decimal point is present, and if so converts to Decimal, otherwise to int. The intention is that simple int-based statements like "SELECT my_seq.nextval() FROM DUAL" continue to return ints and not Decimal objects, and that any kind of floating point value is received as a string so that there is no floating point loss of precision.

The "decimal point is present" logic itself is also sensitive to locale. Under OCI, this is controlled by the NLS_LANG environment variable. Upon first connection, the dialect runs a test to determine the current "decimal" character, which can be a comma "," for European locales. From that point forward the outputtypehandler uses that character to represent a decimal point (this behavior is new in version 0.6.6). Note that cx_oracle 5.0.3 or greater is required when dealing with numerics with locale settings that don't use a period "." as the decimal character.
4.9 PostgreSQL
Support for the PostgreSQL database. For information on connecting using specific drivers, see the documentation section regarding that driver.
4.9.1 Sequences/SERIAL
PostgreSQL supports sequences, and SQLAlchemy uses these as the default means of creating new primary key values for integer-based primary key columns. When creating tables, SQLAlchemy will issue the SERIAL datatype for integer-based primary key columns, which generates a sequence and server side default corresponding to the column.

To specify a specific named sequence to be used for primary key generation, use the Sequence() construct:

Table('sometable', metadata,
    Column('id', Integer, Sequence('some_id_seq'), primary_key=True)
)
When SQLAlchemy issues a single INSERT statement, to fulfill the contract of having the "last insert identifier" available, a RETURNING clause is added to the INSERT statement which specifies the primary key columns should be returned after the statement completes. The RETURNING functionality only takes place if Postgresql 8.2 or later is in use. As a fallback approach, the sequence, whether specified explicitly or implicitly via SERIAL, is executed independently beforehand, the returned value to be used in the subsequent insert. Note that when an insert() construct is executed using "executemany" semantics, the "last inserted identifier" functionality does not apply; no RETURNING clause is emitted nor is the sequence pre-executed in this case. To disable the default usage of RETURNING, specify the flag implicit_returning=False to create_engine().
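For example, a minimal sketch of disabling RETURNING engine-wide, with a placeholder URL:

from sqlalchemy import create_engine

# sequences are then pre-executed to obtain primary key values
engine = create_engine('postgresql://user:pass@host/dbname',
                       implicit_returning=False)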
4.9.4 INSERT/UPDATE...RETURNING
The dialect supports PG 8.2's INSERT..RETURNING, UPDATE..RETURNING and DELETE..RETURNING syntaxes. INSERT..RETURNING is used by default for single-row INSERT statements in order to fetch newly generated primary key identifiers. To specify an explicit RETURNING clause, use the _UpdateBase.returning() method on a per-statement basis:

# INSERT..RETURNING
result = table.insert().returning(table.c.col1, table.c.col2).\
    values(name='foo')
print result.fetchall()

# UPDATE..RETURNING
result = table.update().returning(table.c.col1, table.c.col2).\
    where(table.c.name=='foo').values(name='bar')
MACADDR, NUMERIC, REAL, SMALLINT, TEXT, TIME, TIMESTAMP, \
    UUID, VARCHAR

Types which are specific to PostgreSQL, or have PostgreSQL-specific construction arguments, are as follows:

class sqlalchemy.dialects.postgresql.ARRAY(item_type, mutable=False, as_tuple=False)
Bases: sqlalchemy.types.MutableType, sqlalchemy.types.Concatenable, sqlalchemy.types.TypeEngine

Postgresql ARRAY type. Represents values as Python lists. The ARRAY type may not be supported on all DBAPIs. It is known to work on psycopg2 and not pg8000.

__init__(item_type, mutable=False, as_tuple=False)
Construct an ARRAY.

E.g.:

Column('myarray', ARRAY(Integer))

Arguments are:

Parameters:
item_type - The data type of items of this array. Note that dimensionality is irrelevant here, so multi-dimensional arrays like INTEGER[][] are constructed as ARRAY(Integer), not as ARRAY(ARRAY(Integer)) or such. The type mapping figures this out on the fly.
mutable=False - Specify whether lists passed to this class should be considered mutable - this enables "mutable types" mode in the ORM. Be sure to read the notes for MutableType regarding ORM performance implications (default changed from True in 0.7.0).
Note: This functionality is now superseded by the sqlalchemy.ext.mutable extension described in Mutation Tracking.
as_tuple=False - Specify whether return results should be converted to tuples from lists. DBAPIs such as psycopg2 return lists by default. When tuples are returned, the results are hashable. This flag can only be set to True when mutable is set to False. (new in 0.6.5)

class sqlalchemy.dialects.postgresql.BIT(length=None, varying=False)
Bases: sqlalchemy.types.TypeEngine

class sqlalchemy.dialects.postgresql.BYTEA(length=None)
Bases: sqlalchemy.types.LargeBinary

__init__(length=None)
Construct a LargeBinary type.

Parameters:
length - optional, a length for the column for use in DDL statements, for those BLOB types that accept a length (i.e. MySQL). It does not produce a small BINARY/VARBINARY type - use the BINARY/VARBINARY types specifically for those. May be safely omitted if no CREATE TABLE will be issued. Certain databases may require a length for use in DDL, and will raise an exception when the CREATE TABLE DDL is issued.

class sqlalchemy.dialects.postgresql.CIDR(*args, **kwargs)
Bases: sqlalchemy.types.TypeEngine
__init__(*args, **kwargs)
Support implementations that were passing arguments

class sqlalchemy.dialects.postgresql.DOUBLE_PRECISION(precision=None, asdecimal=False, **kwargs)
Bases: sqlalchemy.types.Float

__init__(precision=None, asdecimal=False, **kwargs)
Construct a Float.

Parameters:
precision - the numeric precision for use in DDL CREATE TABLE.
asdecimal - the same flag as that of Numeric, but defaults to False. Note that setting this flag to True results in floating point conversion.
**kwargs - deprecated. Additional arguments here are ignored by the default Float type. For database specific floats that support additional arguments, see that dialect's documentation for details, such as sqlalchemy.dialects.mysql.FLOAT.

class sqlalchemy.dialects.postgresql.ENUM(*enums, **kw)
Bases: sqlalchemy.types.Enum

__init__(*enums, **kw)
Construct an enum.

Keyword arguments which don't apply to a specific backend are ignored by that backend.

Parameters:
*enums - string or unicode enumeration labels. If unicode labels are present, the convert_unicode flag is auto-enabled.
convert_unicode - Enable unicode-aware bind parameter and result-set processing for this Enum's data. This is set automatically based on the presence of unicode label strings.
metadata - Associate this type directly with a MetaData object. For types that exist on the target database as an independent schema construct (Postgresql), this type will be created and dropped within create_all() and drop_all() operations. If the type is not associated with any MetaData object, it will associate itself with each Table in which it is used, and will be created when any of those individual tables are created, after a check is performed for its existence. The type is only dropped when drop_all() is called for that Table object's metadata, however.
name - The name of this type. This is required for Postgresql and any future supported database which requires an explicitly named type, or an explicitly named constraint in order to generate the type and/or a table that uses it.
native_enum - Use the database's native ENUM type when available. Defaults to True. When False, uses VARCHAR + check constraint for all backends.
schema - Schema name of this type. For types that exist on the target database as an independent schema construct (Postgresql), this parameter specifies the named schema in which the type is present.
quote - Force quoting to be on or off on the type's name. If left as the default of None, the usual schema-level "case sensitive"/"reserved name" rules are used to determine if this type's name should be quoted.
class sqlalchemy.dialects.postgresql.INET(*args, **kwargs)
Bases: sqlalchemy.types.TypeEngine

__init__(*args, **kwargs)
Support implementations that were passing arguments

class sqlalchemy.dialects.postgresql.INTERVAL(precision=None)
Bases: sqlalchemy.types.TypeEngine

Postgresql INTERVAL type. The INTERVAL type may not be supported on all DBAPIs. It is known to work on psycopg2 and not pg8000 or zxjdbc.

class sqlalchemy.dialects.postgresql.MACADDR(*args, **kwargs)
Bases: sqlalchemy.types.TypeEngine

__init__(*args, **kwargs)
Support implementations that were passing arguments

class sqlalchemy.dialects.postgresql.REAL(precision=None, asdecimal=False, **kwargs)
Bases: sqlalchemy.types.Float

The SQL REAL type.

__init__(precision=None, asdecimal=False, **kwargs)
Construct a Float.

Parameters:
precision - the numeric precision for use in DDL CREATE TABLE.
asdecimal - the same flag as that of Numeric, but defaults to False. Note that setting this flag to True results in floating point conversion.
**kwargs - deprecated. Additional arguments here are ignored by the default Float type. For database specific floats that support additional arguments, see that dialect's documentation for details, such as sqlalchemy.dialects.mysql.FLOAT.

class sqlalchemy.dialects.postgresql.UUID(as_uuid=False)
Bases: sqlalchemy.types.TypeEngine

Postgresql UUID type. Represents the UUID column type, interpreting data either as natively returned by the DBAPI or as Python uuid objects. The UUID type may not be supported on all DBAPIs. It is known to work on psycopg2 and not pg8000.

__init__(as_uuid=False)
Construct a UUID type.

Parameters:
as_uuid=False - if True, values will be interpreted as Python uuid objects, converting to/from string via the DBAPI.
Driver

The psycopg2 driver is available at http://pypi.python.org/pypi/psycopg2/ . The dialect has several behaviors which are specifically tailored towards compatibility with this module. Note that psycopg1 is not supported.

Connecting
URLs are of the form postgresql+psycopg2://user:password@host:port/dbname[?key=value&key=value...]

psycopg2-specific keyword arguments which are accepted by create_engine() are:

server_side_cursors - Enable the usage of "server side cursors" for SQL statements which support this feature. What this essentially means from a psycopg2 point of view is that the cursor is created using a name, e.g. connection.cursor('some name'), which has the effect that result rows are not immediately pre-fetched and buffered after statement execution, but are instead left on the server and only retrieved as needed. SQLAlchemy's ResultProxy uses special row-buffering behavior when this feature is enabled, such that groups of 100 rows at a time are fetched over the wire to reduce conversational overhead. Note that the stream_results=True execution option is a more targeted way of enabling this mode on a per-execution basis.
use_native_unicode - Enable the usage of Psycopg2 "native unicode" mode per connection. True by default.

Per-Statement/Connection Execution Options

The following DBAPI-specific options are respected when used with Connection.execution_options(), Executable.execution_options(), Query.execution_options(), in addition to those not specific to DBAPIs:

isolation_level - Set the transaction isolation level for the lifespan of a Connection (can only be set on a connection, not a statement or query). This includes the options SERIALIZABLE, READ COMMITTED, READ UNCOMMITTED and REPEATABLE READ.
stream_results - Enable or disable usage of server side cursors. If None or not set, the server_side_cursors option of the Engine is used.

Unicode

By default, the psycopg2 driver uses the psycopg2.extensions.UNICODE extension, such that the DBAPI receives and returns all strings as Python Unicode objects directly - SQLAlchemy passes these values through without change. Note that this setting requires that the PG client encoding be set to one which can accommodate the kind of character data being passed - typically utf-8. If the Postgresql database is configured for SQL_ASCII encoding, which is often the default for PG installations, it may be necessary for non-ascii strings to be encoded into a specific encoding before being passed to the DBAPI. If changing the database's client encoding setting is not an option, specify use_native_unicode=False as a keyword argument to create_engine(), and take note of the encoding setting as well, which also defaults to utf-8. Note that disabling "native unicode" mode has a slight performance penalty, as SQLAlchemy now must translate unicode strings to/from an encoding such as utf-8, a task that is handled more efficiently within the Psycopg2 driver natively.

Transactions

The psycopg2 dialect fully supports SAVEPOINT and two-phase commit operations.
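A sketch of the two ways of enabling server side cursors described above; the URL and query are placeholders:

from sqlalchemy import create_engine, text

# engine-wide: eligible statements use a named cursor
engine = create_engine('postgresql+psycopg2://user:pass@host/dbname',
                       server_side_cursors=True)

# per-execution: the more targeted option
conn = engine.connect()
result = conn.execution_options(stream_results=True).execute(
    text("SELECT * FROM big_table"))
for row in result:
    pass  # rows arrive from the server in buffered groups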
Client Encoding

The psycopg2 dialect accepts a parameter client_encoding via create_engine() which will call the psycopg2 set_client_encoding() method for each new connection:

engine = create_engine("postgresql://user:pass@host/dbname", client_encoding='utf8')

This overrides the encoding specified in the Postgresql client configuration. See: http://initd.org/psycopg/docs/connection.html#connection.set_client_encoding

New in 0.7.3.

Transaction Isolation Level

The isolation_level parameter of create_engine() here makes use of psycopg2's set_isolation_level() connection method, rather than issuing a SET SESSION CHARACTERISTICS command. This is because psycopg2 resets the isolation level on each new transaction, and needs to know at the API level what level should be used. (A short sketch follows below.)

NOTICE logging

The psycopg2 dialect will log Postgresql NOTICE messages via the sqlalchemy.dialects.postgresql logger:

import logging
logging.getLogger('sqlalchemy.dialects.postgresql').setLevel(logging.INFO)
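Picking up the isolation level notes above, a minimal sketch of configuring it at engine creation time; the URL is a placeholder, and the level names are those listed in the previous section:

from sqlalchemy import create_engine

# psycopg2's set_isolation_level() will be called for each new connection
engine = create_engine(
    "postgresql+psycopg2://scott:tiger@localhost/test",
    isolation_level="SERIALIZABLE")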
Interval

Passing data from/to the Interval type is not yet supported.
4.10 SQLite
Support for the SQLite database. For information on connecting using a specific driver, see the documentation section regarding that driver.
2011-03-15

The storage format can be customized to some degree using the storage_format and regexp parameters, such as:

import re
from sqlalchemy.dialects.sqlite import DATE

d = DATE(
    storage_format="%02d/%02d/%02d",
    regexp=re.compile("(\d+)/(\d+)/(\d+)")
)

Parameters
• storage_format – format string which will be applied to the tuple (value.year, value.month, value.day), given a Python datetime.date() object.
• regexp – regular expression which will be applied to incoming result rows. The resulting match object is applied to the Python date() constructor via *map(int, match_obj.groups(0)).

class sqlalchemy.dialects.sqlite.TIME(storage_format=None, regexp=None, **kw)
Represent a Python time object in SQLite using a string.
The default string storage format is:

"%02d:%02d:%02d.%06d" % (value.hour, value.minute, value.second, value.microsecond)

e.g.:

12:05:57.10558

The storage format can be customized to some degree using the storage_format and regexp parameters, such as:

import re
from sqlalchemy.dialects.sqlite import TIME

t = TIME(
    storage_format="%02d-%02d-%02d-%06d",
    regexp=re.compile("(\d+)-(\d+)-(\d+)-(?:-(\d+))?")
)

Parameters
• storage_format – format string which will be applied to the tuple (value.hour, value.minute, value.second, value.microsecond), given a Python datetime.time() object.
• regexp – regular expression which will be applied to incoming result rows. The resulting match object is applied to the Python time() constructor via *map(int, match_obj.groups(0)).
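Tying the DATE example above together, a brief hypothetical sketch of the customized type end to end; the table and column names are invented:

import datetime
import re

from sqlalchemy import create_engine, MetaData, Table, Column, Integer
from sqlalchemy.dialects.sqlite import DATE

engine = create_engine('sqlite://')
metadata = MetaData()

# store dates as "YYYY/MM/DD" rather than the default ISO format
events = Table('events', metadata,
    Column('id', Integer, primary_key=True),
    Column('happened_on', DATE(
        storage_format="%02d/%02d/%02d",
        regexp=re.compile(r"(\d+)/(\d+)/(\d+)"))))
metadata.create_all(engine)

engine.execute(events.insert(), happened_on=datetime.date(2011, 3, 15))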
4.10.5 Pysqlite
Support for the SQLite database via pysqlite. Note that pysqlite is the same driver as the sqlite3 module included with the Python distribution.
Driver

When using Python 2.5 and above, the built in sqlite3 driver is already installed and no additional installation is needed. Otherwise, the pysqlite2 driver needs to be present. This is the same driver as sqlite3, just with a different name.

The pysqlite2 driver will be loaded first, and if not found, sqlite3 is loaded. This allows an explicitly installed pysqlite driver to take precedence over the built in one. As with all dialects, a specific DBAPI module may be provided to create_engine() to control this explicitly:

from sqlite3 import dbapi2 as sqlite
e = create_engine('sqlite+pysqlite:///file.db', module=sqlite)

Full documentation on pysqlite is available at: http://www.initd.org/pub/software/pysqlite/doc/usage-guide.html

Connect Strings

The file specification for the SQLite database is taken as the "database" portion of the URL. Note that the format of a URL is:

driver://user:pass@host/database

This means that the actual filename to be used starts with the characters to the right of the third slash. So connecting to a relative filepath looks like:

# relative path
e = create_engine('sqlite:///path/to/database.db')

An absolute path, which is denoted by starting with a slash, means you need four slashes:

# absolute path
e = create_engine('sqlite:////path/to/database.db')

To use a Windows path, regular drive specifications and backslashes can be used. Double backslashes are probably needed:

# absolute path on Windows
e = create_engine('sqlite:///C:\\path\\to\\database.db')

The sqlite :memory: identifier is the default if no filepath is present. Specify sqlite:// and nothing else:

# in-memory database
e = create_engine('sqlite://')

Compatibility with sqlite3 "native" date and datetime types

The pysqlite driver includes the sqlite3.PARSE_DECLTYPES and sqlite3.PARSE_COLNAMES options, which have the effect that any column or expression explicitly cast as "date" or "timestamp" will be converted to a Python date or datetime object. The date and datetime types provided with the pysqlite dialect are not currently compatible with these options, since they render the ISO date/datetime including microseconds, which pysqlite's driver does not. Additionally, SQLAlchemy does not at this time automatically render the "cast" syntax required for the freestanding functions current_timestamp and current_date to return datetime/date types natively. Unfortunately, pysqlite does not provide the standard DBAPI types in cursor.description, leaving SQLAlchemy with no way to detect these types on the fly without expensive per-row type checks.

Keeping in mind that pysqlite's parsing option is not recommended, nor should be necessary, for use with SQLAlchemy, usage of PARSE_DECLTYPES can be forced if one configures native_datetime=True on create_engine():
engine = create_engine('sqlite://',
    connect_args={'detect_types': sqlite3.PARSE_DECLTYPES|sqlite3.PARSE_COLNAMES},
    native_datetime=True
)

With this flag enabled, the DATE and TIMESTAMP types (but note - not the DATETIME or TIME types...confused yet?) will not perform any bind parameter or result processing. Execution of func.current_date() will return a string. func.current_timestamp() is registered as returning a DATETIME type in SQLAlchemy, so this function still receives SQLAlchemy-level result processing.

Threading/Pooling Behavior

Pysqlite's default behavior is to prohibit the usage of a single connection in more than one thread. This is controlled by the check_same_thread Pysqlite flag. This default is intended to work with older versions of SQLite that did not support multithreaded operation under various circumstances. In particular, older SQLite versions did not allow a :memory: database to be used in multiple threads under any circumstances.

SQLAlchemy sets up pooling to work with Pysqlite's default behavior:

• When a :memory: SQLite database is specified, the dialect by default will use SingletonThreadPool. This pool maintains a single connection per thread, so that all access to the engine within the current thread use the same :memory: database - other threads would access a different :memory: database.
• When a file-based database is specified, the dialect will use NullPool as the source of connections. This pool closes and discards connections which are returned to the pool immediately. SQLite file-based connections have extremely low overhead, so pooling is not necessary. The scheme also prevents a connection from being used again in a different thread and works best with SQLite's coarse-grained file locking.

Note: The default selection of NullPool for SQLite file-based databases is new in SQLAlchemy 0.7. Previous versions select SingletonThreadPool by default for all SQLite databases.

Modern versions of SQLite no longer have the threading restrictions, and assuming the sqlite3/pysqlite library was built with SQLite's default threading mode of "Serialized", even :memory: databases can be shared among threads.
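Building on that note, a minimal sketch of sharing a single :memory: database among threads, assuming a Serialized-mode SQLite build; StaticPool hands the one global connection to all threads, and pysqlite's check_same_thread flag disables its per-thread restriction:

from sqlalchemy import create_engine
from sqlalchemy.pool import StaticPool

# one global connection shared by every thread
engine = create_engine('sqlite://',
                       connect_args={'check_same_thread': False},
                       poolclass=StaticPool)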
The pool used for a file-based database can likewise be overridden via poolclass, for cases where a single connection needs to remain in use across pool checkouts, such as keeping a temporary table available:

# maintain the same connection per thread
from sqlalchemy.pool import SingletonThreadPool
engine = create_engine('sqlite:///mydb.db', poolclass=SingletonThreadPool)
# maintain the same connection across all threads
from sqlalchemy.pool import StaticPool
engine = create_engine('sqlite:///mydb.db', poolclass=StaticPool)

Note that SingletonThreadPool should be configured for the number of threads that are to be used; beyond that number, connections will be closed out in a non-deterministic way.

Unicode

In contrast to SQLAlchemy's active handling of date and time types for pysqlite, pysqlite's default behavior regarding Unicode is that all strings are returned as Python unicode objects in all cases. So even if the Unicode type is not used, you will still always receive unicode data back from a result set. It is strongly recommended that you do use the Unicode type to represent strings, since it will raise a warning if a non-unicode Python string is passed from the user application. Mixing the usage of non-unicode objects with returned unicode objects can quickly create confusion, particularly when using the ORM as internal data is not always represented by an actual database result string.
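For illustration, a minimal sketch (the table is hypothetical) of declaring string columns with the Unicode type so that the warning described above can fire:

# -*- coding: utf-8 -*-
from sqlalchemy import MetaData, Table, Column, Integer, Unicode

metadata = MetaData()

# Unicode columns warn if a non-unicode Python string is bound to them
users = Table('users', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', Unicode(50)))

# fine: a unicode literal
ins_ok = users.insert().values(name=u'Jörg')
# when executed under Python 2, binding this plain bytestring
# to the Unicode column emits a warning
ins_warn = users.insert().values(name='Jorg')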
4.11 Sybase
Support for Sybase Adaptive Server Enterprise (ASE). Note that this dialect is no longer specific to Sybase iAnywhere. ASE is the primary support platform.
Unicode Support

The pyodbc driver currently supports usage of these Sybase types with Unicode or multibyte strings:

• CHAR
• NCHAR
• NVARCHAR
• TEXT
• VARCHAR

Currently not supported are:

• UNICHAR
• UNITEXT
• UNIVARCHAR
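A small, hypothetical sketch of sticking to the supported list when unicode data is involved; the connection URL and table are invented:

from sqlalchemy import create_engine, MetaData, Table, Column, Integer, NVARCHAR

engine = create_engine('sybase+pyodbc://user:pass@somedsn')
metadata = MetaData()

# NVARCHAR is on the supported list above; UNICHAR, UNITEXT and UNIVARCHAR
# are not, so avoid them when unicode must round-trip through pyodbc
docs = Table('documents', metadata,
    Column('id', Integer, primary_key=True),
    Column('title', NVARCHAR(200)))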
CHAPTER
FIVE
489
490
sqlalchemy.dialects.informix.informixdb, 442 adjacency_list, 234 sqlalchemy.dialects.maxdb.base , 442 association, 234 sqlalchemy.dialects.mssql.adodbapi, 452 sqlalchemy.dialects.mssql.base, 444 b sqlalchemy.dialects.mssql.mxodbc, 451 beaker_caching, 235 sqlalchemy.dialects.mssql.pymssql, 451 sqlalchemy.dialects.mssql.pyodbc, 450 c sqlalchemy.dialects.mssql.zxjdbc, 452 custom_attributes, 234 sqlalchemy.dialects.mysql.base, 452 sqlalchemy.dialects.mysql.mysqlconnector, d 467 dynamic_dict, 236 sqlalchemy.dialects.mysql.mysqldb, 465 sqlalchemy.dialects.mysql.oursql, 466 e sqlalchemy.dialects.mysql.pymysql, 466 elementtree, 239 sqlalchemy.dialects.mysql.pyodbc, 467 sqlalchemy.dialects.mysql.zxjdbc, 467 g sqlalchemy.dialects.oracle.base, 468 generic_associations, 236 sqlalchemy.dialects.oracle.cx_oracle, graphs, 236 472 sqlalchemy.dialects.oracle.zxjdbc, 474 i sqlalchemy.dialects.postgresql.base, 474 inheritance, 237 sqlalchemy.dialects.postgresql.pg8000, 481 l sqlalchemy.dialects.postgresql.psycopg2, large_collection, 237 479 sqlalchemy.dialects.postgresql.pypostgresql, n 481 nested_sets, 237 sqlalchemy.dialects.postgresql.zxjdbc, 482 p sqlalchemy.dialects.sqlite, 483 postgis, 237 sqlalchemy.dialects.sqlite.base, 482 sqlalchemy.dialects.sqlite.pysqlite, 484 s sqlalchemy.dialects.sybase.base, 487 sharding, 236 sqlalchemy.dialects.sybase.mxodbc, 488 sqlalchemy.dialects.access.base, 444 sqlalchemy.dialects.sybase.pyodbc, 487 sqlalchemy.dialects.drizzle.base, 435 sqlalchemy.dialects.sybase.pysybase, 487 sqlalchemy.dialects.drizzle.mysqldb, 440 sqlalchemy.engine.base, 319 sqlalchemy.dialects.firebird.base, 440 sqlalchemy.exc, 420 sqlalchemy.dialects.firebird.kinterbasdbsqlalchemy.ext.associationproxy , , 183 441 sqlalchemy.ext.compiler, 411 sqlalchemy.dialects.informix.base, 442 491
sqlalchemy.ext.declarative, 193 sqlalchemy.ext.horizontal_shard, 218 sqlalchemy.ext.hybrid, 219 sqlalchemy.ext.mutable, 210 sqlalchemy.ext.orderinglist, 216 sqlalchemy.ext.serializer, 417 sqlalchemy.ext.sqlsoup, 226 sqlalchemy.interfaces, 418 sqlalchemy.orm, 67 sqlalchemy.orm.exc, 246 sqlalchemy.orm.interfaces, 240 sqlalchemy.orm.session, 111 sqlalchemy.pool, 335 sqlalchemy.schema, 343 sqlalchemy.sql.expression, 276 sqlalchemy.sql.functions, 310 sqlalchemy.types, 382
v
versioning, 238 vertical, 239
492
INDEX
Symbols
method), 439 __init__() (sqlalchemy.dialects.drizzle.REAL method), _BindParamClause (class in sqlalchemy.sql.expression), 439 289 _CompareMixin (class in sqlalchemy.sql.expression), 293 __init__() (sqlalchemy.dialects.drizzle.TEXT method), 439 _SelectBase (class in sqlalchemy.sql.expression), 307 __init__() (sqlalchemy.dialects.drizzle.VARCHAR __add__() (sqlalchemy.sql.expression.ColumnOperators method), 440 method), 295 __init__() (sqlalchemy.dialects.mssql.BIT method), 447 __and__() (sqlalchemy.sql.expression.Operators __init__() (sqlalchemy.dialects.mssql.CHAR method), method), 303 447 __div__() (sqlalchemy.sql.expression.ColumnOperators __init__() (sqlalchemy.dialects.mssql.IMAGE method), method), 295 448 __eq__() (sqlalchemy.orm.properties.RelationshipProperty.Comparator __init__() (sqlalchemy.dialects.mssql.MONEY method), method), 252 448 __eq__() (sqlalchemy.sql.expression.ColumnOperators __init__() (sqlalchemy.dialects.mssql.NCHAR method), method), 294 448 __eq__() (sqlalchemy.sql.operators.ColumnOperators __init__() (sqlalchemy.dialects.mssql.NTEXT method), method), 296 448 __ge__() (sqlalchemy.sql.expression.ColumnOperators __init__() (sqlalchemy.dialects.mssql.NVARCHAR method), 295 method), 449 __gt__() (sqlalchemy.sql.expression.ColumnOperators __init__() (sqlalchemy.dialects.mssql.SMALLMONEY method), 294 method), 449 __init__ (sqlalchemy.sql.operators.ColumnOperators at__init__() (sqlalchemy.dialects.mssql.SQL_VARIANT tribute), 296 method), 449 __init__ (sqlalchemy.types.AbstractType attribute), 400 __init__() (sqlalchemy.dialects.mssql.TEXT method), __init__ (sqlalchemy.types.Concatenable attribute), 403 449 __init__ (sqlalchemy.types.MutableType attribute), 403 __init__() (sqlalchemy.dialects.drizzle.BIGINT method), __init__() (sqlalchemy.dialects.mssql.TINYINT method), 449 437 __init__() (sqlalchemy.dialects.mssql.UNIQUEIDENTIFIER __init__() (sqlalchemy.dialects.drizzle.CHAR method), method), 449 437 __init__() (sqlalchemy.dialects.mssql.VARCHAR __init__() (sqlalchemy.dialects.drizzle.DECIMAL method), 449 method), 437 __init__() (sqlalchemy.dialects.mysql.BIGINT method), __init__() (sqlalchemy.dialects.drizzle.DOUBLE 456 method), 438 __init__() (sqlalchemy.dialects.drizzle.ENUM method), __init__() (sqlalchemy.dialects.mysql.BIT method), 456 __init__() (sqlalchemy.dialects.mysql.BLOB method), 438 456 __init__() (sqlalchemy.dialects.drizzle.FLOAT method), __init__() (sqlalchemy.dialects.mysql.BOOLEAN 438 method), 457 __init__() (sqlalchemy.dialects.drizzle.INTEGER __init__() (sqlalchemy.dialects.mysql.CHAR method), method), 439 457 __init__() (sqlalchemy.dialects.drizzle.NUMERIC __init__() (sqlalchemy.dialects.mysql.DATE method), 493
__init__() (sqlalchemy.dialects.postgresql.INET method), (sqlalchemy.dialects.mysql.DECIMAL 479 method), 457 __init__() (sqlalchemy.dialects.postgresql.MACADDR __init__() (sqlalchemy.dialects.mysql.DOUBLE method), 479 method), 458 __init__() (sqlalchemy.dialects.postgresql.REAL __init__() (sqlalchemy.dialects.mysql.ENUM method), method), 479 458 __init__() (sqlalchemy.dialects.postgresql.UUID __init__() (sqlalchemy.dialects.mysql.FLOAT method), method), 479 459 __init__() (sqlalchemy.engine.base.Compiled method), __init__() (sqlalchemy.dialects.mysql.INTEGER 424 method), 459 __init__() (sqlalchemy.engine.base.Connection method), __init__() (sqlalchemy.dialects.mysql.LONGTEXT 324 method), 460 __init__() (sqlalchemy.engine.reection.Inspector __init__() (sqlalchemy.dialects.mysql.MEDIUMINT method), 358 method), 460 __init__() (sqlalchemy.ext.associationproxy.AssociationProxy __init__() (sqlalchemy.dialects.mysql.MEDIUMTEXT method), 191 method), 460 __init__() (sqlalchemy.ext.horizontal_shard.ShardedSession __init__() (sqlalchemy.dialects.mysql.NCHAR method), method), 218 461 __init__() (sqlalchemy.ext.hybrid.hybrid_method __init__() (sqlalchemy.dialects.mysql.NUMERIC method), 225 method), 461 __init__() (sqlalchemy.ext.hybrid.hybrid_property __init__() (sqlalchemy.dialects.mysql.NVARCHAR method), 225 method), 461 __init__() (sqlalchemy.ext.sqlsoup.SqlSoup method), 231 __init__() (sqlalchemy.dialects.mysql.REAL method), __init__() (sqlalchemy.orm.collections.MappedCollection 462 method), 101 __init__() (sqlalchemy.dialects.mysql.SET method), 462 __init__() (sqlalchemy.orm.mapper.Mapper method), 63 __init__() (sqlalchemy.dialects.mysql.SMALLINT __init__() (sqlalchemy.orm.properties.ColumnProperty method), 462 method), 247 __init__() (sqlalchemy.dialects.mysql.TEXT method), __init__() (sqlalchemy.orm.properties.RelationshipProperty.Comparator 463 method), 252 __init__() (sqlalchemy.dialects.mysql.TINYINT __init__() (sqlalchemy.orm.session.Session method), 133 method), 463 __init__() (sqlalchemy.pool.Pool method), 339 __init__() (sqlalchemy.dialects.mysql.TINYTEXT __init__() (sqlalchemy.pool.QueuePool method), 340 method), 464 __init__() (sqlalchemy.pool.SingletonThreadPool __init__() (sqlalchemy.dialects.mysql.VARCHAR method), 341 method), 464 __init__() (sqlalchemy.schema.Column method), 348 __init__() (sqlalchemy.dialects.oracle.BFILE method), __init__() (sqlalchemy.schema.DDL method), 380 470 __init__() (sqlalchemy.schema.ForeignKey method), 370 __init__() (sqlalchemy.dialects.oracle.INTERVAL __init__() (sqlalchemy.schema.ForeignKeyConstraint method), 470 method), 372 __init__() (sqlalchemy.dialects.oracle.LONG method), __init__() (sqlalchemy.schema.Index method), 374 471 __init__() (sqlalchemy.schema.MetaData method), 351 __init__() (sqlalchemy.dialects.oracle.NCLOB method), __init__() (sqlalchemy.schema.Sequence method), 365 471 __init__() (sqlalchemy.schema.Table method), 354 __init__() (sqlalchemy.dialects.postgresql.ARRAY __init__() (sqlalchemy.schema.ThreadLocalMetaData method), 477 method), 356 __init__() (sqlalchemy.dialects.postgresql.BYTEA __init__() (sqlalchemy.sql.compiler.IdentierPreparer method), 477 method), 432 __init__() (sqlalchemy.dialects.postgresql.CIDR __init__() (sqlalchemy.sql.compiler.SQLCompiler method), 477 method), 432 __init__() (sqlalchemy.dialects.postgresql.DOUBLE_PRECISION __init__() (sqlalchemy.sql.expression.Function method), method), 478 300 __init__() (sqlalchemy.dialects.postgresql.ENUM __init__() (sqlalchemy.sql.expression.FunctionElement method), 478 
method), 299 __init__()
457
494
Index
__init__() (sqlalchemy.sql.expression.Join method), 302 __rtruediv__() (sqlalchemy.sql.expression.ColumnOperators __init__() (sqlalchemy.sql.expression.Select method), method), 295 305 __sub__() (sqlalchemy.sql.expression.ColumnOperators __init__() (sqlalchemy.sql.expression._BindParamClause method), 295 method), 289 __truediv__() (sqlalchemy.sql.expression.ColumnOperators __init__() (sqlalchemy.types.Boolean method), 382 method), 295 __init__() (sqlalchemy.types.Enum method), 383 _declarative_constructor() (in module __init__() (sqlalchemy.types.Float method), 384 sqlalchemy.ext.declarative), 208 __init__() (sqlalchemy.types.Interval method), 384 _parents (sqlalchemy.ext.mutable.MutableBase attribute), __init__() (sqlalchemy.types.LargeBinary method), 384 215 __init__() (sqlalchemy.types.Numeric method), 385 A __init__() (sqlalchemy.types.PickleType method), 386 __init__() (sqlalchemy.types.String method), 387 AbstractConcreteBase (class in __init__() (sqlalchemy.types.TypeDecorator method), sqlalchemy.ext.declarative), 209 393 AbstractType (class in sqlalchemy.types), 400 __init__() (sqlalchemy.types.TypeEngine method), 401 active_history (sqlalchemy.orm.interfaces.AttributeExtension __init__() (sqlalchemy.types.Unicode method), 388 attribute), 245 __init__() (sqlalchemy.types.UnicodeText method), 389 adapt() (sqlalchemy.types.TypeDecorator method), 393 __init__() (sqlalchemy.types.UserDenedType method), adapt() (sqlalchemy.types.TypeEngine method), 401 399 adapt() (sqlalchemy.types.UserDenedType method), 399 __init__() (sqlalchemy.types.Variant method), 404 adapt_operator() (sqlalchemy.types.UserDenedType __init__() (sqlalchemy.util.ScopedRegistry method), 132 method), 399 __invert__() (sqlalchemy.sql.expression.Operators adapted() (sqlalchemy.orm.interfaces.PropComparator method), 304 method), 250 __le__() (sqlalchemy.sql.expression.ColumnOperators adapted() (sqlalchemy.orm.properties.RelationshipProperty.Comparator method), 295 method), 253 __le__() (sqlalchemy.sql.operators.ColumnOperators add() (sqlalchemy.orm.session.Session method), 135 method), 296 add() (sqlalchemy.sql.expression.ColumnCollection __lt__() (sqlalchemy.sql.expression.ColumnOperators method), 292 method), 295 add_all() (sqlalchemy.orm.session.Session method), 135 __lt__() (sqlalchemy.sql.operators.ColumnOperators add_column() (sqlalchemy.orm.query.Query method), method), 296 144 __mod__() (sqlalchemy.sql.expression.ColumnOperators add_columns() (sqlalchemy.orm.query.Query method), method), 296 144 __mul__() (sqlalchemy.sql.expression.ColumnOperators add_entity() (sqlalchemy.orm.query.Query method), 144 method), 295 add_is_dependent_on() (sqlalchemy.schema.Table __ne__() (sqlalchemy.orm.properties.RelationshipProperty.Comparator method), 354 method), 252 add_properties() (sqlalchemy.orm.mapper.Mapper __ne__() (sqlalchemy.sql.expression.ColumnOperators method), 63 method), 294 add_property() (sqlalchemy.orm.mapper.Mapper __ne__() (sqlalchemy.sql.operators.ColumnOperators method), 63 method), 296 AddConstraint (class in sqlalchemy.schema), 381 __neg__() (sqlalchemy.sql.expression.ColumnOperators added (sqlalchemy.orm.attributes.History attribute), 143 method), 295 adds() (sqlalchemy.orm.collections.collection static __or__() (sqlalchemy.sql.expression.Operators method), method), 98 303 adjacency_list (module), 234 __radd__() (sqlalchemy.sql.expression.ColumnOperators after_attach() (sqlalchemy.orm.events.SessionEvents method), 295 method), 179 __rdiv__() (sqlalchemy.sql.expression.ColumnOperators after_attach() 
(sqlalchemy.orm.interfaces.SessionExtension method), 295 method), 244 __rmul__() (sqlalchemy.sql.expression.ColumnOperators after_begin() (sqlalchemy.orm.events.SessionEvents method), 296 method), 179 __rsub__() (sqlalchemy.sql.expression.ColumnOperators after_begin() (sqlalchemy.orm.interfaces.SessionExtension method), 295 method), 244 Index 495
after_bulk_delete() (sqlalchemy.orm.events.SessionEvents 300 method), 179 alias() (sqlalchemy.sql.expression.Join method), 302 after_bulk_delete() (sqlalchemy.orm.interfaces.SessionExtension aliased (class in sqlalchemy.orm), 158 method), 244 AliasedClass (class in sqlalchemy.orm.util), 158 after_bulk_update() (sqlalchemy.orm.events.SessionEvents all() (sqlalchemy.orm.query.Query method), 145 method), 180 and_() (in module sqlalchemy.sql.expression), 276 after_bulk_update() (sqlalchemy.orm.interfaces.SessionExtension anon_label (sqlalchemy.sql.expression.ColumnElement method), 244 attribute), 293 after_commit() (sqlalchemy.orm.events.SessionEvents AnsiFunction (class in sqlalchemy.sql.functions), 310 method), 180 any() (sqlalchemy.ext.associationproxy.AssociationProxy after_commit() (sqlalchemy.orm.interfaces.SessionExtension method), 192 method), 244 any() (sqlalchemy.orm.interfaces.PropComparator after_congured() (sqlalchemy.orm.events.MapperEvents method), 250 method), 172 any() (sqlalchemy.orm.properties.RelationshipProperty.Comparator after_create() (sqlalchemy.events.DDLEvents method), method), 253 409 append() (sqlalchemy.orm.events.AttributeEvents after_cursor_execute() (sqlalchemy.events.ConnectionEvents method), 170 method), 408 append() (sqlalchemy.orm.interfaces.AttributeExtension after_delete() (sqlalchemy.orm.events.MapperEvents method), 245 method), 172 append_column() (sqlalchemy.schema.Table method), after_delete() (sqlalchemy.orm.interfaces.MapperExtension 354 method), 241 append_column() (sqlalchemy.sql.expression.Select after_drop() (sqlalchemy.events.DDLEvents method), method), 305 409 append_constraint() (sqlalchemy.schema.Table method), after_execute() (sqlalchemy.events.ConnectionEvents 355 method), 408 append_correlation() (sqlalchemy.sql.expression.Select after_ush() (sqlalchemy.orm.events.SessionEvents method), 305 method), 180 append_ddl_listener() (sqlalchemy.schema.MetaData after_ush() (sqlalchemy.orm.interfaces.SessionExtension method), 351 method), 244 append_ddl_listener() (sqlalchemy.schema.Table after_ush_postexec() (sqlalchemy.orm.events.SessionEvents method), 355 method), 180 append_foreign_key() (sqlalchemy.schema.Column after_ush_postexec() (sqlalchemy.orm.interfaces.SessionExtension method), 350 method), 244 append_from() (sqlalchemy.sql.expression.Select after_insert() (sqlalchemy.orm.events.MapperEvents method), 305 method), 172 append_group_by() (sqlalchemy.sql.expression._SelectBase after_insert() (sqlalchemy.orm.interfaces.MapperExtension method), 307 method), 241 append_having() (sqlalchemy.sql.expression.Select after_parent_attach() (sqlalchemy.events.DDLEvents method), 305 method), 410 append_order_by() (sqlalchemy.sql.expression._SelectBase after_rollback() (sqlalchemy.orm.events.SessionEvents method), 307 method), 180 append_prex() (sqlalchemy.sql.expression.Select after_rollback() (sqlalchemy.orm.interfaces.SessionExtension method), 305 method), 244 append_result() (sqlalchemy.orm.events.MapperEvents after_soft_rollback() (sqlalchemy.orm.events.SessionEvents method), 173 method), 180 append_result() (sqlalchemy.orm.interfaces.MapperExtension after_update() (sqlalchemy.orm.events.MapperEvents method), 241 method), 173 append_whereclause() (sqlalchemy.sql.expression.Select after_update() (sqlalchemy.orm.interfaces.MapperExtension method), 305 method), 241 appender() (sqlalchemy.orm.collections.collection static against() (sqlalchemy.schema.DDLElement method), 378 method), 98 Alias (class in sqlalchemy.sql.expression), 289 apply_labels() 
(sqlalchemy.sql.expression._SelectBase alias() (in module sqlalchemy.sql.expression), 276 method), 307 alias() (sqlalchemy.sql.expression.FromClause method), ArgumentError, 420
496
Index
ARRAY (class in sqlalchemy.dialects.postgresql), 477 before_execute() (sqlalchemy.events.ConnectionEvents as_mutable() (sqlalchemy.ext.mutable.Mutable class method), 408 method), 215 before_ush() (sqlalchemy.orm.events.SessionEvents as_scalar() (sqlalchemy.orm.query.Query method), 145 method), 181 as_scalar() (sqlalchemy.sql.expression._SelectBase before_ush() (sqlalchemy.orm.interfaces.SessionExtension method), 308 method), 244 asc() (in module sqlalchemy.sql.expression), 276 before_insert() (sqlalchemy.orm.events.MapperEvents asc() (sqlalchemy.sql.expression._CompareMixin method), 174 method), 293 before_insert() (sqlalchemy.orm.interfaces.MapperExtension asc() (sqlalchemy.sql.operators.ColumnOperators method), 241 method), 296 before_parent_attach() (sqlalchemy.events.DDLEvents AssertionPool (class in sqlalchemy.pool), 341 method), 410 associate_with() (sqlalchemy.ext.mutable.Mutable class before_update() (sqlalchemy.orm.events.MapperEvents method), 216 method), 175 associate_with_attribute() before_update() (sqlalchemy.orm.interfaces.MapperExtension (sqlalchemy.ext.mutable.Mutable class method), 242 method), 216 begin() (sqlalchemy.engine.base.Connection method), association (module), 234 324 association_proxy() (in module begin() (sqlalchemy.events.ConnectionEvents method), sqlalchemy.ext.associationproxy), 190 408 AssociationProxy (class in begin() (sqlalchemy.interfaces.ConnectionProxy sqlalchemy.ext.associationproxy), 191 method), 418 attr (sqlalchemy.ext.associationproxy.AssociationProxy begin() (sqlalchemy.orm.session.Session method), 135 attribute), 192 begin_nested() (sqlalchemy.engine.base.Connection attribute_instrument() (sqlalchemy.orm.events.InstrumentationEvents method), 324 method), 181 begin_nested() (sqlalchemy.orm.session.Session attribute_mapped_collection() (in module method), 135 sqlalchemy.orm.collections), 97 begin_twophase() (sqlalchemy.engine.base.Connection AttributeEvents (class in sqlalchemy.orm.events), 169 method), 325 AttributeExtension (class in sqlalchemy.orm.interfaces), begin_twophase() (sqlalchemy.events.ConnectionEvents 245 method), 408 autocommit() (sqlalchemy.sql.expression._SelectBase begin_twophase() (sqlalchemy.interfaces.ConnectionProxy method), 308 method), 418 autoush() (sqlalchemy.orm.query.Query method), 145 between() (in module sqlalchemy.sql.expression), 276 between() (sqlalchemy.sql.expression._CompareMixin B method), 293 between() (sqlalchemy.sql.operators.ColumnOperators backref() (in module sqlalchemy.orm), 89 method), 296 base_mapper (sqlalchemy.orm.mapper.Mapper attribute), BFILE (class in sqlalchemy.dialects.oracle), 470 64 BIGINT (class in sqlalchemy.dialects.drizzle), 437 beaker_caching (module), 235 before_commit() (sqlalchemy.orm.events.SessionEvents BIGINT (class in sqlalchemy.dialects.mysql), 456 BIGINT (class in sqlalchemy.types), 389 method), 181 BigInteger (class in sqlalchemy.types), 382 before_commit() (sqlalchemy.orm.interfaces.SessionExtension BINARY (class in sqlalchemy.dialects.mysql), 456 method), 244 before_create() (sqlalchemy.events.DDLEvents method), BINARY (class in sqlalchemy.types), 389 bind (sqlalchemy.ext.sqlsoup.SqlSoup attribute), 231 410 bind (sqlalchemy.schema.DDLElement attribute), 378 before_cursor_execute() (sqlalchemy.events.ConnectionEvents bind (sqlalchemy.schema.Index attribute), 374 method), 408 before_delete() (sqlalchemy.orm.events.MapperEvents bind (sqlalchemy.schema.MetaData attribute), 351 bind (sqlalchemy.schema.Table attribute), 355 method), 174 bind 
(sqlalchemy.schema.ThreadLocalMetaData atbefore_delete() (sqlalchemy.orm.interfaces.MapperExtension tribute), 356 method), 241 bind (sqlalchemy.sql.expression.Executable attribute), before_drop() (sqlalchemy.events.DDLEvents method), 298 410 Index 497
bind (sqlalchemy.sql.expression.UpdateBase attribute), 309 bind (sqlalchemy.types.SchemaType attribute), 386 bind_mapper() (sqlalchemy.orm.session.Session method), 135 bind_processor() (sqlalchemy.types.TypeDecorator method), 393 bind_processor() (sqlalchemy.types.TypeEngine method), 401 bind_processor() (sqlalchemy.types.UserDenedType method), 399 bind_table() (sqlalchemy.orm.session.Session method), 135 bindparam() (in module sqlalchemy.sql.expression), 276 BIT (class in sqlalchemy.dialects.mssql), 447 BIT (class in sqlalchemy.dialects.mysql), 456 BIT (class in sqlalchemy.dialects.postgresql), 477 BLOB (class in sqlalchemy.dialects.mysql), 456 BLOB (class in sqlalchemy.types), 389 BOOLEAN (class in sqlalchemy.dialects.mysql), 457 BOOLEAN (class in sqlalchemy.types), 389 Boolean (class in sqlalchemy.types), 382 BYTEA (class in sqlalchemy.dialects.postgresql), 477
class_attribute (sqlalchemy.orm.interfaces.MapperProperty attribute), 249 class_instrument() (sqlalchemy.orm.events.InstrumentationEvents method), 181 class_manager (sqlalchemy.orm.mapper.Mapper attribute), 64 class_mapper() (in module sqlalchemy.orm), 62 class_uninstrument() (sqlalchemy.orm.events.InstrumentationEvents method), 181 ClassManager (class in sqlalchemy.orm.instrumentation), 247 ClauseElement (class in sqlalchemy.sql.expression), 290 ClauseList (class in sqlalchemy.sql.expression), 291 clauses (sqlalchemy.sql.expression.FunctionElement attribute), 299 clear() (sqlalchemy.ext.sqlsoup.SqlSoup method), 231 clear() (sqlalchemy.schema.MetaData method), 351 clear() (sqlalchemy.util.ScopedRegistry method), 132 clear_managers() (in module sqlalchemy.pool), 342 clear_mappers() (in module sqlalchemy.orm), 62 CLOB (class in sqlalchemy.types), 389 close() (sqlalchemy.engine.base.Connection method), 325 close() (sqlalchemy.engine.base.ResultProxy method), C 332 close() (sqlalchemy.engine.base.Transaction method), c (sqlalchemy.orm.mapper.Mapper attribute), 64 334 c (sqlalchemy.sql.expression.FromClause attribute), 300 cascade (sqlalchemy.orm.interfaces.MapperProperty at- close() (sqlalchemy.orm.session.Session method), 136 close_all() (sqlalchemy.orm.session.Session class tribute), 249 method), 136 cascade_iterator() (sqlalchemy.orm.interfaces.MapperProperty closed (sqlalchemy.engine.base.Connection attribute), method), 249 325 cascade_iterator() (sqlalchemy.orm.mapper.Mapper coalesce (class in sqlalchemy.sql.functions), 311 method), 64 coerce_compared_value() case() (in module sqlalchemy.sql.expression), 277 (sqlalchemy.types.TypeDecorator method), cast() (in module sqlalchemy.sql.expression), 278 393 changed() (sqlalchemy.ext.mutable.Mutable method), collate() (in module sqlalchemy.sql.expression), 278 216 (sqlalchemy.sql.expression._CompareMixin changed() (sqlalchemy.ext.mutable.MutableComposite collate() method), 293 method), 216 collate() (sqlalchemy.sql.operators.ColumnOperators CHAR (class in sqlalchemy.dialects.drizzle), 437 method), 296 CHAR (class in sqlalchemy.dialects.mssql), 447 collection (class in sqlalchemy.orm.collections), 97 CHAR (class in sqlalchemy.dialects.mysql), 457 collection_adapter() (in module CHAR (class in sqlalchemy.types), 389 sqlalchemy.orm.collections), 100 char_length (class in sqlalchemy.sql.functions), 311 Column (class in sqlalchemy.schema), 348 CheckConstraint (class in sqlalchemy.schema), 370 column (sqlalchemy.schema.ForeignKey attribute), 371 checkin() (sqlalchemy.events.PoolEvents method), 407 checkin() (sqlalchemy.interfaces.PoolListener method), column() (in module sqlalchemy.sql.expression), 278 column() (sqlalchemy.sql.expression.Select method), 305 420 column_descriptions (sqlalchemy.orm.query.Query atcheckout() (sqlalchemy.events.PoolEvents method), 407 tribute), 145 checkout() (sqlalchemy.interfaces.PoolListener method), column_mapped_collection() (in module 420 sqlalchemy.orm.collections), 100 CIDR (class in sqlalchemy.dialects.postgresql), 477 column_property() (in module sqlalchemy.orm), 42 CircularDependencyError, 421 column_reect() (sqlalchemy.events.DDLEvents class_ (sqlalchemy.orm.mapper.Mapper attribute), 64 498 Index
method), 410 compare_values() (sqlalchemy.types.TypeEngine ColumnClause (class in sqlalchemy.sql.expression), 291 method), 401 ColumnCollection (class in sqlalchemy.sql.expression), compare_values() (sqlalchemy.types.UserDenedType 292 method), 399 ColumnCollectionConstraint (class in compile() (sqlalchemy.engine.base.Compiled method), sqlalchemy.schema), 370 424 ColumnDefault (class in sqlalchemy.schema), 364 compile() (sqlalchemy.orm.mapper.Mapper method), 64 ColumnElement (class in sqlalchemy.sql.expression), 292 compile() (sqlalchemy.sql.expression.ClauseElement ColumnOperators (class in sqlalchemy.sql.operators), 294 method), 290 ColumnProperty (class in sqlalchemy.orm.properties), compile() (sqlalchemy.types.TypeDecorator method), 394 247 compile() (sqlalchemy.types.TypeEngine method), 401 columns (sqlalchemy.orm.mapper.Mapper attribute), 64 compile() (sqlalchemy.types.UserDenedType method), columns (sqlalchemy.sql.expression.FromClause at399 tribute), 300 compile_mappers() (in module sqlalchemy.orm), 62 columns (sqlalchemy.sql.expression.FunctionElement at- Compiled (class in sqlalchemy.engine.base), 424 tribute), 299 compiled (sqlalchemy.orm.mapper.Mapper attribute), 64 commit() (sqlalchemy.engine.base.Transaction method), CompileError, 421 335 composite() (in module sqlalchemy.orm), 55 commit() (sqlalchemy.events.ConnectionEvents method), CompositeProperty (class in 408 sqlalchemy.orm.descriptor_props), 248 commit() (sqlalchemy.ext.sqlsoup.SqlSoup method), 232 CompoundSelect (class in sqlalchemy.sql.expression), commit() (sqlalchemy.interfaces.ConnectionProxy 298 method), 419 concat (class in sqlalchemy.sql.functions), 311 commit() (sqlalchemy.orm.session.Session method), 136 concat() (sqlalchemy.sql.operators.ColumnOperators commit() (sqlalchemy.orm.state.InstanceState method), method), 296 248 Concatenable (class in sqlalchemy.types), 403 commit_all() (sqlalchemy.orm.state.InstanceState concrete (sqlalchemy.orm.mapper.Mapper attribute), 65 method), 248 ConcreteBase (class in sqlalchemy.ext.declarative), 209 commit_twophase() (sqlalchemy.events.ConnectionEvents ConcurrentModicationError (in module method), 408 sqlalchemy.orm.exc), 246 commit_twophase() (sqlalchemy.interfaces.ConnectionProxy congure() (sqlalchemy.orm.scoping.ScopedSession method), 419 method), 131 common_parent() (sqlalchemy.orm.mapper.Mapper congure_mappers() (in module sqlalchemy.orm), 62 method), 64 congured (sqlalchemy.orm.mapper.Mapper attribute), comparable_property() (in module sqlalchemy.orm), 52 65 comparable_using() (in module connect() (sqlalchemy.engine.base.Connectable method), sqlalchemy.ext.declarative), 209 329 Comparator (class in sqlalchemy.ext.hybrid), 226 connect() (sqlalchemy.engine.base.Connection method), comparator() (sqlalchemy.ext.hybrid.hybrid_property 325 method), 226 connect() (sqlalchemy.engine.base.Dialect method), 427 compare() (sqlalchemy.orm.interfaces.MapperProperty connect() (sqlalchemy.engine.base.Engine method), 329 method), 249 connect() (sqlalchemy.engine.default.DefaultDialect compare() (sqlalchemy.sql.expression._BindParamClause method), 425 method), 289 connect() (sqlalchemy.events.PoolEvents method), 407 compare() (sqlalchemy.sql.expression.ClauseElement connect() (sqlalchemy.interfaces.PoolListener method), method), 290 420 compare() (sqlalchemy.sql.expression.ClauseList connect() (sqlalchemy.pool.Pool method), 340 method), 291 Connectable (class in sqlalchemy.engine.base), 329 compare() (sqlalchemy.sql.expression.ColumnElement Connection (class in 
sqlalchemy.engine.base), 324 method), 293 connection (sqlalchemy.engine.base.Connection atcompare_values() (sqlalchemy.types.MutableType tribute), 325 method), 403 connection (sqlalchemy.engine.default.DefaultExecutionContext compare_values() (sqlalchemy.types.TypeDecorator attribute), 430 method), 394 connection() (sqlalchemy.ext.sqlsoup.SqlSoup method),
Index
499
232 329 connection() (sqlalchemy.orm.session.Session method), create() (sqlalchemy.engine.base.Connection method), 136 325 ConnectionEvents (class in sqlalchemy.events), 407 create() (sqlalchemy.engine.base.Engine method), 330 ConnectionProxy (class in sqlalchemy.interfaces), 418 create() (sqlalchemy.schema.Index method), 374 Constraint (class in sqlalchemy.schema), 370 create() (sqlalchemy.schema.Sequence method), 366 construct_params() (sqlalchemy.engine.base.Compiled create() (sqlalchemy.schema.Table method), 355 method), 424 create() (sqlalchemy.types.SchemaType method), 386 construct_params() (sqlalchemy.sql.compiler.SQLCompiler create_all() (sqlalchemy.schema.MetaData method), 351 method), 433 create_connect_args() (sqlalchemy.engine.base.Dialect contains() (sqlalchemy.ext.associationproxy.AssociationProxy method), 427 method), 192 create_connect_args() (sqlalchemy.engine.default.DefaultDialect contains() (sqlalchemy.orm.properties.RelationshipProperty.Comparator method), 425 method), 253 create_cursor() (sqlalchemy.engine.base.ExecutionContext contains() (sqlalchemy.sql.expression._CompareMixin method), 431 method), 293 create_cursor() (sqlalchemy.engine.default.DefaultExecutionContext contains() (sqlalchemy.sql.operators.ColumnOperators method), 430 method), 296 create_engine() (in module sqlalchemy), 314 contains_alias() (in module sqlalchemy.orm), 166 create_instance() (sqlalchemy.orm.events.MapperEvents contains_eager() (in module sqlalchemy.orm), 166 method), 175 contextual_connect() (sqlalchemy.engine.base.Connectable create_instance() (sqlalchemy.orm.interfaces.MapperExtension method), 329 method), 242 contextual_connect() (sqlalchemy.engine.base.Connection create_row_processor() (sqlalchemy.orm.interfaces.MapperProperty method), 325 method), 249 contextual_connect() (sqlalchemy.engine.base.Engine create_xid() (sqlalchemy.engine.base.Dialect method), method), 329 427 converter() (sqlalchemy.orm.collections.collection static create_xid() (sqlalchemy.engine.default.DefaultDialect method), 99 method), 425 copy() (sqlalchemy.schema.Column method), 350 CreateIndex (class in sqlalchemy.schema), 381 copy() (sqlalchemy.schema.ForeignKey method), 371 CreateSequence (class in sqlalchemy.schema), 381 copy() (sqlalchemy.types.TypeDecorator method), 394 CreateTable (class in sqlalchemy.schema), 381 copy_value() (sqlalchemy.types.MutableType method), current_date (class in sqlalchemy.sql.functions), 311 403 current_time (class in sqlalchemy.sql.functions), 311 copy_value() (sqlalchemy.types.TypeDecorator method), current_timestamp (class in sqlalchemy.sql.functions), 394 311 copy_value() (sqlalchemy.types.TypeEngine method), current_user (class in sqlalchemy.sql.functions), 311 401 cursor_execute() (sqlalchemy.interfaces.ConnectionProxy copy_value() (sqlalchemy.types.UserDenedType method), 419 method), 399 custom_attributes (module), 234 correlate() (sqlalchemy.orm.query.Query method), 145 correlate() (sqlalchemy.sql.expression.Select method), D 305 DatabaseError, 421 correspond_on_equivalents() DataError, 421 (sqlalchemy.sql.expression.FromClause DATE (class in sqlalchemy.dialects.mysql), 457 method), 300 DATE (class in sqlalchemy.dialects.sqlite), 483 corresponding_column() (sqlalchemy.sql.expression.FromClause DATE (class in sqlalchemy.types), 389 method), 300 Date (class in sqlalchemy.types), 382 count (class in sqlalchemy.sql.functions), 311 DATETIME (class in sqlalchemy.dialects.mysql), 457 count() (sqlalchemy.orm.query.Query method), 146 DATETIME (class in 
sqlalchemy.dialects.sqlite), 483 count() (sqlalchemy.sql.expression.FromClause method), DATETIME (class in sqlalchemy.types), 389 301 DateTime (class in sqlalchemy.types), 383 count() (sqlalchemy.sql.expression.TableClause method), DATETIME2 (class in sqlalchemy.dialects.mssql), 448 309 DATETIMEOFFSET (class in create() (sqlalchemy.engine.base.Connectable method), sqlalchemy.dialects.mssql), 448 500 Index
DBAPIError, 421 dialect_description (sqlalchemy.engine.default.DefaultDialect DDL (class in sqlalchemy.schema), 380 attribute), 425 ddl_compiler (sqlalchemy.engine.default.DefaultDialect dialect_impl() (sqlalchemy.types.TypeDecorator attribute), 425 method), 394 DDLCompiler (class in sqlalchemy.sql.compiler), 424 dialect_impl() (sqlalchemy.types.TypeEngine method), DDLElement (class in sqlalchemy.schema), 378 401 DDLEvents (class in sqlalchemy.events), 408 dialect_impl() (sqlalchemy.types.UserDenedType DECIMAL (class in sqlalchemy.dialects.drizzle), 437 method), 399 DECIMAL (class in sqlalchemy.dialects.mysql), 457 dict_getter() (sqlalchemy.orm.interfaces.InstrumentationManager DECIMAL (class in sqlalchemy.types), 389 method), 182 declarative_base() (in module dirty (sqlalchemy.orm.session.Session attribute), 137 sqlalchemy.ext.declarative), 207 DisconnectionError, 421 declared_attr (class in sqlalchemy.ext.declarative), 207 dispose() (sqlalchemy.engine.base.Engine method), 330 default_from() (sqlalchemy.sql.compiler.SQLCompiler dispose() (sqlalchemy.orm.instrumentation.ClassManager method), 433 method), 247 default_schema_name (sqlalchemy.engine.reection.Inspector dispose() (sqlalchemy.orm.interfaces.InstrumentationManager attribute), 358 method), 182 DefaultClause (class in sqlalchemy.schema), 364 dispose() (sqlalchemy.pool.Pool method), 340 DefaultDialect (class in sqlalchemy.engine.default), 424 dispose() (sqlalchemy.schema.ThreadLocalMetaData DefaultExecutionContext (class in method), 356 sqlalchemy.engine.default), 430 distinct() (in module sqlalchemy.sql.expression), 279 DefaultGenerator (class in sqlalchemy.schema), 365 distinct() (sqlalchemy.orm.query.Query method), 147 defer() (in module sqlalchemy.orm), 45 distinct() (sqlalchemy.sql.expression._CompareMixin deferred() (in module sqlalchemy.orm), 45 method), 293 dene_constraint_remote_table() distinct() (sqlalchemy.sql.expression.Select method), 305 (sqlalchemy.sql.compiler.DDLCompiler distinct() (sqlalchemy.sql.operators.ColumnOperators method), 424 method), 296 del_attribute() (in module sqlalchemy.orm.attributes), 142 do_begin() (sqlalchemy.engine.base.Dialect method), 427 Delete (class in sqlalchemy.sql.expression), 298 do_begin() (sqlalchemy.engine.default.DefaultDialect delete() (in module sqlalchemy.sql.expression), 278 method), 425 delete() (sqlalchemy.ext.sqlsoup.SqlSoup method), 232 do_begin_twophase() (sqlalchemy.engine.base.Dialect delete() (sqlalchemy.orm.query.Query method), 146 method), 427 delete() (sqlalchemy.orm.session.Session method), 137 do_commit() (sqlalchemy.engine.base.Dialect method), delete() (sqlalchemy.sql.expression.TableClause method), 427 309 do_commit() (sqlalchemy.engine.default.DefaultDialect deleted (sqlalchemy.orm.attributes.History attribute), 143 method), 425 deleted (sqlalchemy.orm.session.Session attribute), 137 do_commit_twophase() (sqlalchemy.engine.base.Dialect deleter() (sqlalchemy.ext.hybrid.hybrid_property method), 427 method), 226 do_execute() (sqlalchemy.engine.base.Dialect method), denormalize_name() (sqlalchemy.engine.base.Dialect 427 method), 427 do_execute() (sqlalchemy.engine.default.DefaultDialect desc() (in module sqlalchemy.sql.expression), 279 method), 425 desc() (sqlalchemy.sql.expression._CompareMixin do_executemany() (sqlalchemy.engine.base.Dialect method), 293 method), 428 desc() (sqlalchemy.sql.operators.ColumnOperators do_executemany() (sqlalchemy.engine.default.DefaultDialect method), 296 method), 425 description (sqlalchemy.sql.expression.FromClause at- do_init() 
(sqlalchemy.orm.descriptor_props.CompositeProperty tribute), 301 method), 248 Deserializer() (in module sqlalchemy.ext.serializer), 418 do_init() (sqlalchemy.orm.interfaces.MapperProperty detach() (sqlalchemy.engine.base.Connection method), method), 250 325 do_prepare_twophase() (sqlalchemy.engine.base.Dialect DetachedInstanceError, 246 method), 428 Dialect (class in sqlalchemy.engine.base), 426 do_recover_twophase() (sqlalchemy.engine.base.Dialect method), 428
Index
501
do_release_savepoint() (sqlalchemy.engine.base.Dialect enable_eagerloads() (sqlalchemy.orm.query.Query method), 428 method), 147 do_release_savepoint() (sqlalchemy.engine.default.DefaultDialect endswith() (sqlalchemy.sql.expression._CompareMixin method), 425 method), 293 do_rollback() (sqlalchemy.engine.base.Dialect method), endswith() (sqlalchemy.sql.operators.ColumnOperators 428 method), 296 do_rollback() (sqlalchemy.engine.default.DefaultDialect Engine (class in sqlalchemy.engine.base), 329 method), 425 engine (sqlalchemy.ext.sqlsoup.SqlSoup attribute), 232 do_rollback_to_savepoint() engine_from_cong() (in module sqlalchemy), 316 (sqlalchemy.engine.base.Dialect method), entity() (sqlalchemy.ext.sqlsoup.SqlSoup method), 232 428 ENUM (class in sqlalchemy.dialects.drizzle), 438 do_rollback_to_savepoint() ENUM (class in sqlalchemy.dialects.mysql), 458 (sqlalchemy.engine.default.DefaultDialect ENUM (class in sqlalchemy.dialects.postgresql), 478 method), 425 Enum (class in sqlalchemy.types), 383 do_rollback_twophase() (sqlalchemy.engine.base.Dialect escape_literal_column() (sqlalchemy.sql.compiler.SQLCompiler method), 428 method), 433 do_savepoint() (sqlalchemy.engine.base.Dialect method), except_() (in module sqlalchemy.sql.expression), 279 428 except_() (sqlalchemy.orm.query.Query method), 147 do_savepoint() (sqlalchemy.engine.default.DefaultDialect except_() (sqlalchemy.sql.expression.Select method), 306 method), 425 except_all() (in module sqlalchemy.sql.expression), 279 DontWrapMixin (class in sqlalchemy.exc), 421 except_all() (sqlalchemy.orm.query.Query method), 147 DOUBLE (class in sqlalchemy.dialects.drizzle), 437 except_all() (sqlalchemy.sql.expression.Select method), DOUBLE (class in sqlalchemy.dialects.mysql), 458 306 DOUBLE_PRECISION (class in Executable (class in sqlalchemy.sql.expression), 298 sqlalchemy.dialects.oracle), 470 execute() (sqlalchemy.engine.base.Compiled method), DOUBLE_PRECISION (class in 424 sqlalchemy.dialects.postgresql), 478 execute() (sqlalchemy.engine.base.Connectable method), driver (sqlalchemy.engine.base.Engine attribute), 330 329 drop() (sqlalchemy.engine.base.Connectable method), execute() (sqlalchemy.engine.base.Connection method), 329 326 drop() (sqlalchemy.engine.base.Connection method), 325 execute() (sqlalchemy.engine.base.Engine method), 330 drop() (sqlalchemy.engine.base.Engine method), 330 execute() (sqlalchemy.ext.sqlsoup.SqlSoup method), 232 drop() (sqlalchemy.schema.Index method), 374 execute() (sqlalchemy.interfaces.ConnectionProxy drop() (sqlalchemy.schema.Sequence method), 366 method), 419 drop() (sqlalchemy.schema.Table method), 355 execute() (sqlalchemy.orm.session.Session method), 137 drop() (sqlalchemy.types.SchemaType method), 386 execute() (sqlalchemy.schema.DDLElement method), drop_all() (sqlalchemy.schema.MetaData method), 351 378 DropConstraint (class in sqlalchemy.schema), 382 execute() (sqlalchemy.sql.expression.ClauseElement DropIndex (class in sqlalchemy.schema), 381 method), 290 DropSequence (class in sqlalchemy.schema), 381 execute() (sqlalchemy.sql.expression.Executable DropTable (class in sqlalchemy.schema), 381 method), 298 dumps() (in module sqlalchemy.ext.serializer), 418 execute() (sqlalchemy.sql.expression.FunctionElement dynamic_dict (module), 236 method), 299 dynamic_loader() (in module sqlalchemy.orm), 89 execute_at() (sqlalchemy.schema.DDLElement method), 378 E execute_if() (sqlalchemy.schema.DDLElement method), 379 eagerload() (in module sqlalchemy.orm), 167 execute_sequence_format eagerload_all() (in module 
sqlalchemy.orm), 167 (sqlalchemy.engine.default.DefaultDialect echo (sqlalchemy.engine.base.Engine attribute), 330 attribute), 425 elementtree (module), 239 execution_ctx_cls (sqlalchemy.engine.default.DefaultDialect empty() (sqlalchemy.orm.attributes.History method), 143 attribute), 425 enable_assertions() (sqlalchemy.orm.query.Query execution_options() (sqlalchemy.engine.base.Connection method), 147 method), 327
502
Index
execution_options() (sqlalchemy.orm.query.Query method), 147 execution_options() (sqlalchemy.sql.expression.Executable method), 298 ExecutionContext (class in sqlalchemy.engine.base), 431 exists() (in module sqlalchemy.sql.expression), 279 exists() (sqlalchemy.schema.Table method), 355 expire() (sqlalchemy.orm.events.InstanceEvents method), 177 expire() (sqlalchemy.orm.session.Session method), 137 expire_all() (sqlalchemy.orm.session.Session method), 138 expire_attribute_pre_commit() (sqlalchemy.orm.state.InstanceState method), 248 expired_attributes (sqlalchemy.orm.state.InstanceState attribute), 248 expression() (sqlalchemy.ext.hybrid.hybrid_method method), 225 expression() (sqlalchemy.ext.hybrid.hybrid_property method), 226 expunge() (sqlalchemy.ext.sqlsoup.SqlSoup method), 232 expunge() (sqlalchemy.orm.session.Session method), 138 expunge_all() (sqlalchemy.ext.sqlsoup.SqlSoup method), 232 expunge_all() (sqlalchemy.orm.session.Session method), 138 extract() (in module sqlalchemy.sql.expression), 279
ush() (sqlalchemy.ext.sqlsoup.SqlSoup method), 232 ush() (sqlalchemy.orm.session.Session method), 138 FlushError, 246 foreign_keys (sqlalchemy.sql.expression.FromClause attribute), 301 ForeignKey (class in sqlalchemy.schema), 370 ForeignKeyConstraint (class in sqlalchemy.schema), 372 format_column() (sqlalchemy.sql.compiler.IdentierPreparer method), 432 format_table() (sqlalchemy.sql.compiler.IdentierPreparer method), 432 format_table_seq() (sqlalchemy.sql.compiler.IdentierPreparer method), 432 from_engine() (sqlalchemy.engine.reection.Inspector class method), 358 from_self() (sqlalchemy.orm.query.Query method), 148 from_statement() (sqlalchemy.orm.query.Query method), 148 FromClause (class in sqlalchemy.sql.expression), 300 froms (sqlalchemy.sql.expression.Select attribute), 306 func (in module sqlalchemy.sql.expression), 280 func (sqlalchemy.engine.base.Engine attribute), 330 Function (class in sqlalchemy.sql.expression), 300 FunctionElement (class in sqlalchemy.sql.expression), 299
G
generic_associations (module), 236 GenericFunction (class in sqlalchemy.sql.functions), 310 get() (sqlalchemy.orm.query.Query method), 148 get_attribute() (in module sqlalchemy.orm.attributes), 142 get_bind() (sqlalchemy.orm.session.Session method), 139 get_children() (sqlalchemy.schema.Column method), 350 get_children() (sqlalchemy.schema.Table method), 355 get_children() (sqlalchemy.sql.expression.ClauseElement method), 290 get_children() (sqlalchemy.sql.expression.Select method), 306 get_columns() (sqlalchemy.engine.base.Dialect method), 428 get_columns() (sqlalchemy.engine.reection.Inspector method), 358 get_dbapi_type() (sqlalchemy.types.TypeDecorator method), 394 get_dbapi_type() (sqlalchemy.types.TypeEngine method), 401 get_dbapi_type() (sqlalchemy.types.UserDenedType method), 399 get_dialect() (sqlalchemy.engine.url.URL method), 317 get_foreign_keys() (sqlalchemy.engine.base.Dialect method), 428 get_foreign_keys() (sqlalchemy.engine.reection.Inspector method), 358 get_history() (in module sqlalchemy.orm.attributes), 142 503
F
false() (in module sqlalchemy.sql.expression), 280 fetchall() (sqlalchemy.engine.base.ResultProxy method), 332 FetchedValue (class in sqlalchemy.schema), 365 fetchmany() (sqlalchemy.engine.base.ResultProxy method), 332 fetchone() (sqlalchemy.engine.base.ResultProxy method), 332 lter() (sqlalchemy.orm.query.Query method), 147 lter_by() (sqlalchemy.orm.query.Query method), 148 rst() (sqlalchemy.engine.base.ResultProxy method), 332 rst() (sqlalchemy.orm.query.Query method), 148 rst_connect() (sqlalchemy.events.PoolEvents method), 407 rst_connect() (sqlalchemy.interfaces.PoolListener method), 420 rst_init() (sqlalchemy.orm.events.InstanceEvents method), 178 ag_modied() (in module sqlalchemy.orm.attributes), 143 FLOAT (class in sqlalchemy.dialects.drizzle), 438 FLOAT (class in sqlalchemy.dialects.mysql), 459 FLOAT (class in sqlalchemy.types), 389 Float (class in sqlalchemy.types), 383 Index
get_history() (sqlalchemy.orm.descriptor_props.CompositeProperty method), 429 method), 248 get_view_names() (sqlalchemy.engine.reection.Inspector get_indexes() (sqlalchemy.engine.base.Dialect method), method), 359 428 graphs (module), 236 get_indexes() (sqlalchemy.engine.reection.Inspector group_by() (sqlalchemy.orm.query.Query method), 148 method), 359 group_by() (sqlalchemy.sql.expression._SelectBase get_insert_default() (sqlalchemy.engine.default.DefaultExecutionContext method), 308 method), 430 H get_instance_dict() (sqlalchemy.orm.interfaces.InstrumentationManager method), 182 handle_dbapi_exception() get_isolation_level() (sqlalchemy.engine.base.Dialect (sqlalchemy.engine.base.ExecutionContext method), 429 method), 431 get_lastrowid() (sqlalchemy.engine.default.DefaultExecutionContext handle_dbapi_exception() method), 430 (sqlalchemy.engine.default.DefaultExecutionContext get_pk_constraint() (sqlalchemy.engine.base.Dialect method), 430 method), 429 has() (sqlalchemy.ext.associationproxy.AssociationProxy get_pk_constraint() (sqlalchemy.engine.default.DefaultDialect method), 192 method), 425 has() (sqlalchemy.orm.interfaces.PropComparator get_pk_constraint() (sqlalchemy.engine.reection.Inspector method), 251 method), 359 has() (sqlalchemy.orm.properties.RelationshipProperty.Comparator get_pool_class() (sqlalchemy.engine.default.DefaultDialect method), 254 class method), 425 has() (sqlalchemy.util.ScopedRegistry method), 132 get_primary_keys() (sqlalchemy.engine.base.Dialect has_changes() (sqlalchemy.orm.attributes.History method), 429 method), 143 get_primary_keys() (sqlalchemy.engine.reection.Inspector has_inherited_table() (in module method), 359 sqlalchemy.ext.declarative), 208 get_property() (sqlalchemy.orm.mapper.Mapper method), has_key() (sqlalchemy.engine.base.RowProxy method), 65 334 get_property_by_column() has_parent() (sqlalchemy.orm.instrumentation.ClassManager (sqlalchemy.orm.mapper.Mapper method), method), 247 65 has_sequence() (sqlalchemy.engine.base.Dialect get_referent() (sqlalchemy.schema.ForeignKey method), method), 429 371 has_table() (sqlalchemy.engine.base.Dialect method), get_result_proxy() (sqlalchemy.engine.default.DefaultExecutionContext 429 method), 430 having() (sqlalchemy.orm.query.Query method), 149 get_rowcount() (sqlalchemy.engine.base.ExecutionContext having() (sqlalchemy.sql.expression.Select method), 306 method), 431 History (class in sqlalchemy.orm.attributes), 143 get_schema_names() (sqlalchemy.engine.reection.Inspector hybrid_method (class in sqlalchemy.ext.hybrid), 225 method), 359 hybrid_property (class in sqlalchemy.ext.hybrid), 225 get_select_precolumns() (sqlalchemy.sql.compiler.SQLCompiler method), 433 I get_table_names() (sqlalchemy.engine.base.Dialect IdentierError, 422 method), 429 IdentierPreparer (class in sqlalchemy.sql.compiler), 432 get_table_names() (sqlalchemy.engine.reection.Inspector identity_key() (in module sqlalchemy.orm.util), 62 method), 359 identity_key_from_instance() get_table_options() (sqlalchemy.engine.reection.Inspector (sqlalchemy.orm.mapper.Mapper method), method), 359 65 get_update_default() (sqlalchemy.engine.default.DefaultExecutionContext identity_key_from_primary_key() method), 430 (sqlalchemy.orm.mapper.Mapper method), get_view_denition() (sqlalchemy.engine.base.Dialect 65 method), 429 identity_key_from_row() get_view_denition() (sqlalchemy.engine.reection.Inspector (sqlalchemy.orm.mapper.Mapper method), method), 359 65 get_view_names() (sqlalchemy.engine.base.Dialect
ilike() (sqlalchemy.sql.operators.ColumnOperators method), 297
IMAGE (class in sqlalchemy.dialects.mssql), 448
impl (sqlalchemy.types.Interval attribute), 384
impl (sqlalchemy.types.PickleType attribute), 386
in_() (sqlalchemy.orm.properties.RelationshipProperty.Comparator method), 254
in_() (sqlalchemy.sql.expression._CompareMixin method), 294
in_() (sqlalchemy.sql.operators.ColumnOperators method), 297
in_transaction() (sqlalchemy.engine.base.Connection method), 327
Index (class in sqlalchemy.schema), 374
INET (class in sqlalchemy.dialects.postgresql), 478
info (sqlalchemy.engine.base.Connection attribute), 327
inheritance (module), 237
inherits (sqlalchemy.orm.mapper.Mapper attribute), 65
init() (sqlalchemy.orm.events.InstanceEvents method), 178
init() (sqlalchemy.orm.interfaces.MapperProperty method), 250
init_collection() (in module sqlalchemy.orm.attributes), 142
init_failed() (sqlalchemy.orm.interfaces.MapperExtension method), 242
init_failure() (sqlalchemy.orm.events.InstanceEvents method), 178
init_instance() (sqlalchemy.orm.interfaces.MapperExtension method), 242
initialize() (sqlalchemy.engine.base.Dialect method), 429
initialize() (sqlalchemy.engine.default.DefaultDialect method), 425
initialize() (sqlalchemy.orm.state.InstanceState method), 249
initialize_instance_dict() (sqlalchemy.orm.interfaces.InstrumentationManager method), 182
inner_columns (sqlalchemy.sql.expression.Select attribute), 306
Insert (class in sqlalchemy.sql.expression), 301
insert() (in module sqlalchemy.sql.expression), 280
insert() (sqlalchemy.sql.expression.TableClause method), 309
inserted_primary_key (sqlalchemy.engine.base.ResultProxy attribute), 332
Inspector (class in sqlalchemy.engine.reflection), 358
install_descriptor() (sqlalchemy.orm.interfaces.InstrumentationManager method), 182
install_member() (sqlalchemy.orm.interfaces.InstrumentationManager method), 182
install_state() (sqlalchemy.orm.interfaces.InstrumentationManager method), 182
instance_state() (in module sqlalchemy.orm.attributes), 143
InstanceEvents (class in sqlalchemy.orm.events), 177
instances() (sqlalchemy.orm.query.Query method), 149
InstanceState (class in sqlalchemy.orm.state), 248
instrument_attribute() (sqlalchemy.orm.interfaces.InstrumentationManager method), 182
instrument_class() (sqlalchemy.orm.events.MapperEvents method), 176
instrument_class() (sqlalchemy.orm.interfaces.MapperExtension method), 243
instrument_collection_class() (sqlalchemy.orm.interfaces.InstrumentationManager method), 182
instrument_declarative() (in module sqlalchemy.ext.declarative), 209
InstrumentationEvents (class in sqlalchemy.orm.events), 181
InstrumentationManager (class in sqlalchemy.orm.interfaces), 182
INT (in module sqlalchemy.types), 390
INTEGER (class in sqlalchemy.dialects.drizzle), 439
INTEGER (class in sqlalchemy.dialects.mysql), 459
INTEGER (class in sqlalchemy.types), 390
Integer (class in sqlalchemy.types), 384
IntegrityError, 422
InterfaceError, 422
InternalError, 422
internally_instrumented() (sqlalchemy.orm.collections.collection static method), 99
intersect() (in module sqlalchemy.sql.expression), 281
intersect() (sqlalchemy.orm.query.Query method), 149
intersect() (sqlalchemy.sql.expression.Select method), 306
intersect_all() (in module sqlalchemy.sql.expression), 281
intersect_all() (sqlalchemy.orm.query.Query method), 149
intersect_all() (sqlalchemy.sql.expression.Select method), 306
INTERVAL (class in sqlalchemy.dialects.oracle), 470
INTERVAL (class in sqlalchemy.dialects.postgresql), 479
Interval (class in sqlalchemy.types), 384
invalidate() (sqlalchemy.engine.base.Connection method), 328
invalidated (sqlalchemy.engine.base.Connection attribute), 328
InvalidRequestError, 422
is_active (sqlalchemy.orm.session.Session attribute), 139
is_bound() (sqlalchemy.schema.MetaData method), 352
is_bound() (sqlalchemy.schema.ThreadLocalMetaData method), 356
is_crud (sqlalchemy.engine.default.DefaultExecutionContext attribute), 430
is_derived_from() (sqlalchemy.sql.expression.FromClause method), 301
is_disconnect() (sqlalchemy.engine.base.Dialect method), 429
is_disconnect() (sqlalchemy.engine.default.DefaultDialect method), 425
is_insert (sqlalchemy.engine.base.ResultProxy attribute), 333
is_modified() (sqlalchemy.orm.session.Session method), 139
is_mutable() (sqlalchemy.types.MutableType method), 403
is_mutable() (sqlalchemy.types.PickleType method), 386
is_mutable() (sqlalchemy.types.TypeDecorator method), 394
is_mutable() (sqlalchemy.types.TypeEngine method), 401
is_mutable() (sqlalchemy.types.UserDefinedType method), 399
is_primary() (sqlalchemy.orm.interfaces.MapperProperty method), 250
isa() (sqlalchemy.orm.mapper.Mapper method), 65
items() (sqlalchemy.engine.base.RowProxy method), 334
iterate_properties (sqlalchemy.orm.mapper.Mapper attribute), 65
iterator() (sqlalchemy.orm.collections.collection static method), 99
J
Join (class in sqlalchemy.sql.expression), 302
join() (in module sqlalchemy.orm), 159
join() (in module sqlalchemy.sql.expression), 281
join() (sqlalchemy.ext.sqlsoup.SqlSoup method), 232
join() (sqlalchemy.orm.query.Query method), 149
join() (sqlalchemy.sql.expression.FromClause method), 301
joinedload() (in module sqlalchemy.orm), 167
joinedload_all() (in module sqlalchemy.orm), 167

K
key (sqlalchemy.schema.Table attribute), 355
keys() (sqlalchemy.engine.base.ResultProxy method), 333
keys() (sqlalchemy.engine.base.RowProxy method), 334

L
label() (in module sqlalchemy.sql.expression), 282
label() (sqlalchemy.orm.query.Query method), 153
label() (sqlalchemy.sql.expression._CompareMixin method), 294
label() (sqlalchemy.sql.expression._SelectBase method), 308
label_select_column() (sqlalchemy.sql.compiler.SQLCompiler method), 433
large_collection (module), 237
LargeBinary (class in sqlalchemy.types), 384
last_inserted_ids() (sqlalchemy.engine.base.ResultProxy method), 333
last_inserted_params() (sqlalchemy.engine.base.ResultProxy method), 333
last_updated_params() (sqlalchemy.engine.base.ResultProxy method), 333
lastrow_has_defaults() (sqlalchemy.engine.base.ExecutionContext method), 431
lastrow_has_defaults() (sqlalchemy.engine.base.ResultProxy method), 333
lastrow_has_defaults() (sqlalchemy.engine.default.DefaultExecutionContext method), 430
lastrowid (sqlalchemy.engine.base.ResultProxy attribute), 333
lazyload() (in module sqlalchemy.orm), 168
like() (sqlalchemy.sql.operators.ColumnOperators method), 297
limit() (sqlalchemy.orm.query.Query method), 153
limit() (sqlalchemy.sql.expression._SelectBase method), 308
link() (sqlalchemy.orm.collections.collection static method), 99
listen() (in module sqlalchemy.event), 405
listens_for() (in module sqlalchemy.event), 406
literal() (in module sqlalchemy.sql.expression), 282
literal_column() (in module sqlalchemy.sql.expression), 282
load() (sqlalchemy.orm.events.InstanceEvents method), 178
load_dialect_impl() (sqlalchemy.types.TypeDecorator method), 394
loads() (in module sqlalchemy.ext.serializer), 418
local_attr (sqlalchemy.ext.associationproxy.AssociationProxy attribute), 192
local_table (sqlalchemy.orm.mapper.Mapper attribute), 65
localtime (class in sqlalchemy.sql.functions), 311
localtimestamp (class in sqlalchemy.sql.functions), 311
locate_all_froms (sqlalchemy.sql.expression.Select attribute), 306
LONG (class in sqlalchemy.dialects.oracle), 471
LONGBLOB (class in sqlalchemy.dialects.mysql), 459
LONGTEXT (class in sqlalchemy.dialects.mysql), 459

M
MACADDR (class in sqlalchemy.dialects.postgresql), 479
make_transient() (in module sqlalchemy.orm.session), 142
manage() (in module sqlalchemy.pool), 342
manage() (sqlalchemy.orm.instrumentation.ClassManager method), 247
manage() (sqlalchemy.orm.interfaces.InstrumentationManager method), 182
manager_getter() (sqlalchemy.orm.interfaces.InstrumentationManager method), 182
manager_of_class() (in module sqlalchemy.orm.attributes), 143
map() (sqlalchemy.ext.sqlsoup.SqlSoup method), 232
map_to() (sqlalchemy.ext.sqlsoup.SqlSoup method), 233
mapped_collection() (in module sqlalchemy.orm.collections), 101
mapped_table (sqlalchemy.orm.mapper.Mapper attribute), 65
MappedCollection (class in sqlalchemy.orm.collections), 101
Mapper (class in sqlalchemy.orm.mapper), 63
mapper (sqlalchemy.orm.properties.RelationshipProperty attribute), 254
mapper() (in module sqlalchemy.orm), 59
mapper_configured() (sqlalchemy.orm.events.MapperEvents method), 176
MapperEvents (class in sqlalchemy.orm.events), 171
MapperExtension (class in sqlalchemy.orm.interfaces), 240
MapperProperty (class in sqlalchemy.orm.interfaces), 249
match() (sqlalchemy.sql.expression._CompareMixin method), 294
match() (sqlalchemy.sql.operators.ColumnOperators method), 297
max (class in sqlalchemy.sql.functions), 311
MEDIUMBLOB (class in sqlalchemy.dialects.mysql), 460
MEDIUMINT (class in sqlalchemy.dialects.mysql), 460
MEDIUMTEXT (class in sqlalchemy.dialects.mysql), 460
merge() (sqlalchemy.orm.interfaces.MapperProperty method), 250
merge() (sqlalchemy.orm.session.Session method), 140
merge_result() (sqlalchemy.orm.query.Query method), 153
MetaData (class in sqlalchemy.schema), 350
min (class in sqlalchemy.sql.functions), 311
MONEY (class in sqlalchemy.dialects.mssql), 448
MultipleResultsFound, 246
Mutable (class in sqlalchemy.ext.mutable), 215
MutableBase (class in sqlalchemy.ext.mutable), 215
MutableComposite (class in sqlalchemy.ext.mutable), 216
MutableType (class in sqlalchemy.types), 402

N
name (sqlalchemy.engine.base.Engine attribute), 330
NCHAR (class in sqlalchemy.dialects.mssql), 448
NCHAR (class in sqlalchemy.dialects.mysql), 461
NCHAR (class in sqlalchemy.types), 390
NCLOB (class in sqlalchemy.dialects.oracle), 470
nested_sets (module), 237
NestedTransaction (class in sqlalchemy.engine.base), 331
new (sqlalchemy.orm.session.Session attribute), 140
next_value (class in sqlalchemy.sql.functions), 311
next_value() (sqlalchemy.schema.Sequence method), 366
NO_STATE (in module sqlalchemy.orm.exc), 246
non_added() (sqlalchemy.orm.attributes.History method), 143
non_deleted() (sqlalchemy.orm.attributes.History method), 143
non_primary (sqlalchemy.orm.mapper.Mapper attribute), 66
NoReferencedColumnError, 422
NoReferencedTableError, 422
NoReferenceError, 422
NoResultFound, 246
normalize_name() (sqlalchemy.engine.base.Dialect method), 430
NoSuchColumnError, 422
NoSuchTableError, 422
not_() (in module sqlalchemy.sql.expression), 282
NotSupportedError, 422
now (class in sqlalchemy.sql.functions), 311
NTEXT (class in sqlalchemy.dialects.mssql), 448
null() (in module sqlalchemy.sql.expression), 282
NullPool (class in sqlalchemy.pool), 341
nullsfirst() (in module sqlalchemy.sql.expression), 282
nullsfirst() (sqlalchemy.sql.expression._CompareMixin method), 294
nullsfirst() (sqlalchemy.sql.operators.ColumnOperators method), 297
nullslast() (in module sqlalchemy.sql.expression), 283
nullslast() (sqlalchemy.sql.expression._CompareMixin method), 294
nullslast() (sqlalchemy.sql.operators.ColumnOperators method), 297
NullType (class in sqlalchemy.types), 403
NUMBER (class in sqlalchemy.dialects.oracle), 471
NUMERIC (class in sqlalchemy.dialects.drizzle), 439
NUMERIC (class in sqlalchemy.dialects.mysql), 461
NUMERIC (class in sqlalchemy.types), 390
Numeric (class in sqlalchemy.types), 385
NVARCHAR (class in sqlalchemy.dialects.mssql), 448
NVARCHAR (class in sqlalchemy.dialects.mysql), 461
NVARCHAR (class in sqlalchemy.types), 390

O
object_mapper() (in module sqlalchemy.orm), 62
object_session() (in module sqlalchemy.orm.session), 142
object_session() (sqlalchemy.orm.session.Session class method), 140
ObjectDeletedError, 246
ObjectDereferencedError, 246
of_type() (sqlalchemy.orm.interfaces.PropComparator method), 251
of_type() (sqlalchemy.orm.properties.RelationshipProperty.Comparator method), 254
offset() (sqlalchemy.orm.query.Query method), 153
offset() (sqlalchemy.sql.expression._SelectBase method), 308
on_connect() (sqlalchemy.engine.default.DefaultDialect method), 425
one() (sqlalchemy.orm.query.Query method), 153
op() (sqlalchemy.sql.expression._CompareMixin method), 294
op() (sqlalchemy.sql.expression.Operators method), 304
op() (sqlalchemy.sql.operators.ColumnOperators method), 297
operate() (sqlalchemy.sql.expression.Operators method), 304
operate() (sqlalchemy.sql.operators.ColumnOperators method), 297
OperationalError, 423
Operators (class in sqlalchemy.sql.expression), 303
options() (sqlalchemy.orm.query.Query method), 153
or_() (in module sqlalchemy.sql.expression), 283
order_by() (sqlalchemy.orm.query.Query method), 153
order_by() (sqlalchemy.sql.expression._SelectBase method), 308
ordering_list() (in module sqlalchemy.ext.orderinglist), 218
original_init (sqlalchemy.orm.instrumentation.ClassManager attribute), 247
outerjoin() (in module sqlalchemy.orm), 159
outerjoin() (in module sqlalchemy.sql.expression), 283
outerjoin() (sqlalchemy.orm.query.Query method), 154
outerjoin() (sqlalchemy.sql.expression.FromClause method), 301
outparam() (in module sqlalchemy.sql.expression), 283
over() (in module sqlalchemy.sql.expression), 283
over() (sqlalchemy.sql.expression.FunctionElement method), 299

P
params (sqlalchemy.engine.base.Compiled attribute), 424
params (sqlalchemy.sql.compiler.SQLCompiler attribute), 433
params() (sqlalchemy.orm.query.Query method), 154
params() (sqlalchemy.sql.expression.ClauseElement method), 290
params() (sqlalchemy.sql.expression.UpdateBase method), 309
PASSIVE_NO_FETCH (in module sqlalchemy.orm.attributes), 144
PASSIVE_NO_FETCH_RELATED (in module sqlalchemy.orm.attributes), 144
PASSIVE_NO_INITIALIZE (in module sqlalchemy.orm.attributes), 143
PASSIVE_OFF (in module sqlalchemy.orm.attributes), 144
PASSIVE_ONLY_PERSISTENT (in module sqlalchemy.orm.attributes), 144
PassiveDefault (class in sqlalchemy.schema), 365
pickle() (sqlalchemy.orm.events.InstanceEvents method), 178
PickleType (class in sqlalchemy.types), 386
polymorphic_identity (sqlalchemy.orm.mapper.Mapper attribute), 66
polymorphic_iterator() (sqlalchemy.orm.mapper.Mapper method), 66
polymorphic_map (sqlalchemy.orm.mapper.Mapper attribute), 66
polymorphic_on (sqlalchemy.orm.mapper.Mapper attribute), 66
polymorphic_union() (in module sqlalchemy.orm.util), 63
Pool (class in sqlalchemy.pool), 339
PoolEvents (class in sqlalchemy.events), 406
PoolListener (class in sqlalchemy.interfaces), 419
populate_existing() (sqlalchemy.orm.query.Query method), 154
populate_instance() (sqlalchemy.orm.events.MapperEvents method), 176
populate_instance() (sqlalchemy.orm.interfaces.MapperExtension method), 243
post_configure_attribute() (sqlalchemy.orm.interfaces.InstrumentationManager method), 182
post_exec() (sqlalchemy.engine.base.ExecutionContext method), 431
post_exec() (sqlalchemy.engine.default.DefaultExecutionContext method), 430
post_insert() (sqlalchemy.engine.default.DefaultExecutionContext method), 430
post_instrument_class() (sqlalchemy.orm.interfaces.MapperProperty method), 250
postfetch_cols() (sqlalchemy.engine.base.ResultProxy method), 333
postgis (module), 237
pre_exec() (sqlalchemy.engine.base.ExecutionContext method), 432
pre_exec() (sqlalchemy.engine.default.DefaultExecutionContext method), 431
prefix_with() (sqlalchemy.sql.expression.Insert method), 301
prefix_with() (sqlalchemy.sql.expression.Select method), 306
prepare() (sqlalchemy.engine.base.TwoPhaseTransaction method), 335
prepare() (sqlalchemy.orm.session.Session method), 140
prepare_twophase() (sqlalchemy.events.ConnectionEvents method), 408
prepare_twophase() (sqlalchemy.interfaces.ConnectionProxy method), 419
preparer (sqlalchemy.engine.default.DefaultDialect attribute), 425
primary_key (sqlalchemy.orm.mapper.Mapper attribute), 66
primary_key (sqlalchemy.sql.expression.FromClause attribute), 301
primary_key_from_instance() (sqlalchemy.orm.mapper.Mapper method), 67
primary_mapper() (sqlalchemy.orm.mapper.Mapper method), 67
PrimaryKeyConstraint (class in sqlalchemy.schema), 372
process() (sqlalchemy.engine.base.Compiled method), 424
process_bind_param() (sqlalchemy.types.TypeDecorator method), 395
process_result_value() (sqlalchemy.types.TypeDecorator method), 395
ProgrammingError, 423
PropComparator (class in sqlalchemy.orm.interfaces), 250
prune() (sqlalchemy.orm.session.Session method), 140
Python Enhancement Proposals
    PEP 249, 342
Q
Query (class in sqlalchemy.orm.query), 144
query() (sqlalchemy.orm.session.Session method), 140
query_property() (sqlalchemy.orm.scoping.ScopedSession method), 131
QueryContext (class in sqlalchemy.orm.query), 254
QueuePool (class in sqlalchemy.pool), 340
quote_identifier() (sqlalchemy.sql.compiler.IdentifierPreparer method), 432
quote_schema() (sqlalchemy.sql.compiler.IdentifierPreparer method), 432

R
random (class in sqlalchemy.sql.functions), 311
RAW (class in sqlalchemy.dialects.oracle), 472
raw_connection() (sqlalchemy.engine.base.Engine method), 331
REAL (class in sqlalchemy.dialects.drizzle), 439
REAL (class in sqlalchemy.dialects.mssql), 449
REAL (class in sqlalchemy.dialects.mysql), 462
REAL (class in sqlalchemy.dialects.postgresql), 479
REAL (class in sqlalchemy.types), 390
reconstruct_instance() (sqlalchemy.orm.interfaces.MapperExtension method), 243
reconstructor() (in module sqlalchemy.orm), 59
recreate() (sqlalchemy.pool.Pool method), 340
references() (sqlalchemy.schema.Column method), 350
references() (sqlalchemy.schema.ForeignKey method), 371
reflect() (sqlalchemy.schema.MetaData method), 352
reflecttable() (sqlalchemy.engine.base.Connection method), 328
reflecttable() (sqlalchemy.engine.base.Dialect method), 430
reflecttable() (sqlalchemy.engine.base.Engine method), 331
reflecttable() (sqlalchemy.engine.default.DefaultDialect method), 425
reflecttable() (sqlalchemy.engine.reflection.Inspector method), 360
refresh() (sqlalchemy.orm.events.InstanceEvents method), 178
refresh() (sqlalchemy.orm.session.Session method), 140
relation() (in module sqlalchemy.orm), 89
relationship() (in module sqlalchemy.orm), 84
RelationshipProperty (class in sqlalchemy.orm.properties), 251
RelationshipProperty.Comparator (class in sqlalchemy.orm.properties), 252
release_savepoint() (sqlalchemy.events.ConnectionEvents method), 408
release_savepoint() (sqlalchemy.interfaces.ConnectionProxy method), 419
remote_attr (sqlalchemy.ext.associationproxy.AssociationProxy attribute), 192
remove() (sqlalchemy.orm.collections.MappedCollection method), 101
remove() (sqlalchemy.orm.events.AttributeEvents method), 170
remove() (sqlalchemy.orm.interfaces.AttributeExtension method), 245
remove() (sqlalchemy.orm.scoping.ScopedSession method), 132
remove() (sqlalchemy.schema.MetaData method), 352
remove_state() (sqlalchemy.orm.interfaces.InstrumentationManager method), 182
remover() (sqlalchemy.orm.collections.collection static method), 99
removes() (sqlalchemy.orm.collections.collection static method), 100
removes_return() (sqlalchemy.orm.collections.collection static method), 100
render_literal_value() (sqlalchemy.sql.compiler.SQLCompiler method), 433
replace() (sqlalchemy.sql.expression.ColumnCollection method), 292
replace_selectable() (sqlalchemy.sql.expression.FromClause method), 301
replaces() (sqlalchemy.orm.collections.collection static method), 100
reset() (sqlalchemy.orm.state.InstanceState method), 249
reset_isolation_level() (sqlalchemy.engine.base.Dialect method), 430
reset_isolation_level() (sqlalchemy.engine.default.DefaultDialect method), 425
reset_joinpoint() (sqlalchemy.orm.query.Query method), 154
ResourceClosedError, 423
result() (sqlalchemy.engine.base.ExecutionContext method), 432
result_processor() (sqlalchemy.types.TypeDecorator method), 395
result_processor() (sqlalchemy.types.TypeEngine method), 401
result_processor() (sqlalchemy.types.UserDefinedType method), 400
ResultProxy (class in sqlalchemy.engine.base), 332
resurrect() (sqlalchemy.orm.events.InstanceEvents method), 179
returning() (sqlalchemy.sql.expression.Insert method), 302
returning() (sqlalchemy.sql.expression.UpdateBase method), 309
returns_rows (sqlalchemy.engine.base.ResultProxy attribute), 333
ReturnTypeFromArgs (class in sqlalchemy.sql.functions), 311
reverse_operate() (sqlalchemy.sql.expression.Operators method), 304
reverse_operate() (sqlalchemy.sql.operators.ColumnOperators method), 298
rollback() (sqlalchemy.engine.base.Transaction method), 335
rollback() (sqlalchemy.events.ConnectionEvents method), 408
rollback() (sqlalchemy.ext.sqlsoup.SqlSoup method), 233
rollback() (sqlalchemy.interfaces.ConnectionProxy method), 419
rollback() (sqlalchemy.orm.session.Session method), 141
rollback_savepoint() (sqlalchemy.events.ConnectionEvents method), 408
rollback_savepoint() (sqlalchemy.interfaces.ConnectionProxy method), 419
rollback_twophase() (sqlalchemy.events.ConnectionEvents method), 408
rollback_twophase() (sqlalchemy.interfaces.ConnectionProxy method), 419
rowcount (sqlalchemy.engine.base.ResultProxy attribute), 333
rowcount (sqlalchemy.engine.default.DefaultExecutionContext attribute), 431
RowProxy (class in sqlalchemy.engine.base), 334
run_callable() (sqlalchemy.engine.base.Connection method), 328
run_callable() (sqlalchemy.engine.base.Engine method), 331

S
SADeprecationWarning, 423
SAPendingDeprecationWarning, 423
savepoint() (sqlalchemy.events.ConnectionEvents method), 408
savepoint() (sqlalchemy.interfaces.ConnectionProxy method), 419
SAWarning, 423
scalar (sqlalchemy.ext.associationproxy.AssociationProxy attribute), 193
scalar() (sqlalchemy.engine.base.Compiled method), 424
scalar() (sqlalchemy.engine.base.Connectable method), 329
scalar() (sqlalchemy.engine.base.Connection method), 328
scalar() (sqlalchemy.engine.base.ResultProxy method), 334
scalar() (sqlalchemy.orm.query.Query method), 154
scalar() (sqlalchemy.orm.session.Session method), 141
scalar() (sqlalchemy.sql.expression.ClauseElement method), 291
scalar() (sqlalchemy.sql.expression.Executable method), 299
scalar() (sqlalchemy.sql.expression.FunctionElement method), 299
SchemaEventTarget (class in sqlalchemy.events), 411
SchemaItem (class in sqlalchemy.schema), 352
SchemaType (class in sqlalchemy.types), 386
scoped_session() (in module sqlalchemy.orm), 131
ScopedRegistry (class in sqlalchemy.util), 132
ScopedSession (class in sqlalchemy.orm.scoping), 131
Select (class in sqlalchemy.sql.expression), 305
select() (in module sqlalchemy.sql.expression), 284
select() (sqlalchemy.sql.expression.FromClause method), 301
select() (sqlalchemy.sql.expression.FunctionElement method), 300
select() (sqlalchemy.sql.expression.Join method), 303
select_from() (sqlalchemy.orm.query.Query method), 154
select_from() (sqlalchemy.sql.expression.Select method), 306
Selectable (class in sqlalchemy.sql.expression), 307
self_and_descendants (sqlalchemy.orm.mapper.Mapper attribute), 67
self_group() (sqlalchemy.sql.expression.ClauseElement method), 291
self_group() (sqlalchemy.sql.expression.Select method), 306
Sequence (class in sqlalchemy.schema), 365
Serializer() (in module sqlalchemy.ext.serializer), 418
Session (class in sqlalchemy.orm.session), 133
session_user (class in sqlalchemy.sql.functions), 311
SessionEvents (class in sqlalchemy.orm.events), 179
SessionExtension (class in sqlalchemy.orm.interfaces), 243
sessionmaker() (in module sqlalchemy.orm.session), 133
SessionTransaction (class in sqlalchemy.orm.session), 141
SET (class in sqlalchemy.dialects.mysql), 462
set() (sqlalchemy.orm.collections.MappedCollection method), 101
set() (sqlalchemy.orm.events.AttributeEvents method), 170
set() (sqlalchemy.orm.interfaces.AttributeExtension method), 245
set() (sqlalchemy.util.ScopedRegistry method), 132
set_attribute() (in module sqlalchemy.orm.attributes), 143
set_callable() (sqlalchemy.orm.state.InstanceState method), 249
set_committed_value() (in module sqlalchemy.orm.attributes), 143
set_input_sizes() (sqlalchemy.engine.default.DefaultExecutionContext method), 431
set_isolation_level() (sqlalchemy.engine.base.Dialect method), 430
set_shard() (sqlalchemy.ext.horizontal_shard.ShardedQuery method), 218
setter() (sqlalchemy.ext.hybrid.hybrid_property method), 226
setup() (sqlalchemy.orm.interfaces.MapperProperty method), 250
ShardedQuery (class in sqlalchemy.ext.horizontal_shard), 218
ShardedSession (class in sqlalchemy.ext.horizontal_shard), 218
sharding (module), 236
shares_lineage() (sqlalchemy.sql.expression.ColumnElement method), 293
should_autocommit (sqlalchemy.engine.default.DefaultExecutionContext attribute), 431
should_autocommit_text() (sqlalchemy.engine.base.ExecutionContext method), 432
should_autocommit_text() (sqlalchemy.engine.default.DefaultExecutionContext method), 431
single (sqlalchemy.orm.mapper.Mapper attribute), 67
SingletonThreadPool (class in sqlalchemy.pool), 341
slice() (sqlalchemy.orm.query.Query method), 154
SMALLDATETIME (class in sqlalchemy.dialects.mssql), 449
SMALLINT (class in sqlalchemy.dialects.mysql), 462
SMALLINT (class in sqlalchemy.types), 390
SmallInteger (class in sqlalchemy.types), 386
SMALLMONEY (class in sqlalchemy.dialects.mssql), 449
sorted_tables (sqlalchemy.schema.MetaData attribute), 352
sql_compiler (sqlalchemy.engine.base.Compiled attribute), 424
SQL_VARIANT (class in sqlalchemy.dialects.mssql), 449
sqlalchemy.dialects.access.base (module), 444
sqlalchemy.dialects.drizzle.base (module), 435
sqlalchemy.dialects.drizzle.mysqldb (module), 440
sqlalchemy.dialects.firebird.base (module), 440
sqlalchemy.dialects.firebird.kinterbasdb (module), 441
sqlalchemy.dialects.informix.base (module), 442
sqlalchemy.dialects.informix.informixdb (module), 442
sqlalchemy.dialects.maxdb.base (module), 442
sqlalchemy.dialects.mssql.adodbapi (module), 452
sqlalchemy.dialects.mssql.base (module), 444
sqlalchemy.dialects.mssql.mxodbc (module), 451
sqlalchemy.dialects.mssql.pymssql (module), 451
sqlalchemy.dialects.mssql.pyodbc (module), 450
sqlalchemy.dialects.mssql.zxjdbc (module), 452
sqlalchemy.dialects.mysql.base (module), 452
sqlalchemy.dialects.mysql.mysqlconnector (module), 467
sqlalchemy.dialects.mysql.mysqldb (module), 465
sqlalchemy.dialects.mysql.oursql (module), 466
sqlalchemy.dialects.mysql.pymysql (module), 466
sqlalchemy.dialects.mysql.pyodbc (module), 467
sqlalchemy.dialects.mysql.zxjdbc (module), 467
sqlalchemy.dialects.oracle.base (module), 468
sqlalchemy.dialects.oracle.cx_oracle (module), 472
sqlalchemy.dialects.oracle.zxjdbc (module), 474
sqlalchemy.dialects.postgresql.base (module), 474
sqlalchemy.dialects.postgresql.pg8000 (module), 481
sqlalchemy.dialects.postgresql.psycopg2 (module), 479
sqlalchemy.dialects.postgresql.pypostgresql (module), 481
sqlalchemy.dialects.postgresql.zxjdbc (module), 482
sqlalchemy.dialects.sqlite (module), 483
sqlalchemy.dialects.sqlite.base (module), 482
sqlalchemy.dialects.sqlite.pysqlite (module), 484
sqlalchemy.dialects.sybase.base (module), 487
sqlalchemy.dialects.sybase.mxodbc (module), 488
sqlalchemy.dialects.sybase.pyodbc (module), 487
sqlalchemy.dialects.sybase.pysybase (module), 487
sqlalchemy.engine.base (module), 319
sqlalchemy.exc (module), 420
sqlalchemy.ext.associationproxy (module), 183
sqlalchemy.ext.compiler (module), 411
sqlalchemy.ext.declarative (module), 193
sqlalchemy.ext.horizontal_shard (module), 218
sqlalchemy.ext.hybrid (module), 219
sqlalchemy.ext.mutable (module), 210
sqlalchemy.ext.orderinglist (module), 216
sqlalchemy.ext.serializer (module), 417
sqlalchemy.ext.sqlsoup (module), 226
sqlalchemy.interfaces (module), 418
sqlalchemy.orm (module), 38, 67, 144
sqlalchemy.orm.exc (module), 246
sqlalchemy.orm.interfaces (module), 240
sqlalchemy.orm.session (module), 111
sqlalchemy.pool (module), 335
sqlalchemy.schema (module), 343
sqlalchemy.sql.expression (module), 276
sqlalchemy.sql.functions (module), 310
sqlalchemy.types (module), 382
SQLAlchemyError, 423
SQLCompiler (class in sqlalchemy.sql.compiler), 432
SqlSoup (class in sqlalchemy.ext.sqlsoup), 231
StaleDataError, 246
startswith() (sqlalchemy.sql.expression._CompareMixin method), 294
startswith() (sqlalchemy.sql.operators.ColumnOperators method), 298
state_getter() (sqlalchemy.orm.instrumentation.ClassManager method), 247
state_getter() (sqlalchemy.orm.interfaces.InstrumentationManager method), 182
statement (sqlalchemy.orm.query.Query attribute), 155
statement_compiler (sqlalchemy.engine.default.DefaultDialect attribute), 426
StatementError, 423
StaticPool (class in sqlalchemy.pool), 341
String (class in sqlalchemy.types), 387
subquery() (in module sqlalchemy.sql.expression), 285
subquery() (sqlalchemy.orm.query.Query method), 155
subqueryload() (in module sqlalchemy.orm), 168
subqueryload_all() (in module sqlalchemy.orm), 168
sum (class in sqlalchemy.sql.functions), 311
sum() (sqlalchemy.orm.attributes.History method), 143
supports_sane_multi_rowcount() (sqlalchemy.engine.base.ResultProxy method), 334
supports_sane_multi_rowcount() (sqlalchemy.engine.default.DefaultExecutionContext method), 431
supports_sane_rowcount() (sqlalchemy.engine.base.ResultProxy method), 334
supports_sane_rowcount() (sqlalchemy.engine.default.DefaultExecutionContext method), 431
synonym() (in module sqlalchemy.orm), 51
synonym_for() (in module sqlalchemy.ext.declarative), 208
SynonymProperty (class in sqlalchemy.orm.descriptor_props), 254
sysdate (class in sqlalchemy.sql.functions), 311

T
Table (class in sqlalchemy.schema), 352
table (sqlalchemy.orm.properties.RelationshipProperty attribute), 254
table() (in module sqlalchemy.sql.expression), 285
table_names() (sqlalchemy.engine.base.Engine method), 331
TableClause (class in sqlalchemy.sql.expression), 308
tables (sqlalchemy.orm.mapper.Mapper attribute), 67
target_class (sqlalchemy.ext.associationproxy.AssociationProxy attribute), 193
target_fullname (sqlalchemy.schema.ForeignKey attribute), 371
TEXT (class in sqlalchemy.dialects.drizzle), 439
TEXT (class in sqlalchemy.dialects.mssql), 449
TEXT (class in sqlalchemy.dialects.mysql), 463
TEXT (class in sqlalchemy.types), 390
Text (class in sqlalchemy.types), 388
text() (in module sqlalchemy.sql.expression), 286
text() (sqlalchemy.engine.base.Engine method), 331
thread safety
    Connection, 324
    MetaData, 351
    Session, 114
    sessions, 114
    SessionTransaction, 141
    Transaction, 334
    transactions, 321
ThreadLocalMetaData (class in sqlalchemy.schema), 355
ThreadLocalRegistry (class in sqlalchemy.util), 132
TIME (class in sqlalchemy.dialects.mssql), 449
TIME (class in sqlalchemy.dialects.mysql), 463
TIME (class in sqlalchemy.dialects.sqlite), 484
TIME (class in sqlalchemy.types), 390
Time (class in sqlalchemy.types), 388
TimeoutError, 423
TIMESTAMP (class in sqlalchemy.dialects.drizzle), 439
TIMESTAMP (class in sqlalchemy.dialects.mysql), 463
TIMESTAMP (class in sqlalchemy.types), 390
timetuple (sqlalchemy.sql.operators.ColumnOperators attribute), 298
TINYBLOB (class in sqlalchemy.dialects.mysql), 463
TINYINT (class in sqlalchemy.dialects.mssql), 449
TINYINT (class in sqlalchemy.dialects.mysql), 463
TINYTEXT (class in sqlalchemy.dialects.mysql), 464
tometadata() (sqlalchemy.schema.Table method), 355
Transaction (class in sqlalchemy.engine.base), 334
transaction (sqlalchemy.orm.session.Session attribute), 141
transaction() (sqlalchemy.engine.base.Connection method), 328
transaction() (sqlalchemy.engine.base.Engine method), 331
translate_connect_args() (sqlalchemy.engine.url.URL method), 317
translate_row() (sqlalchemy.orm.events.MapperEvents method), 176
translate_row() (sqlalchemy.orm.interfaces.MapperExtension method), 243
true() (in module sqlalchemy.sql.expression), 287
tuple_() (in module sqlalchemy.sql.expression), 287
TwoPhaseTransaction (class in sqlalchemy.engine.base), 335
type_coerce() (in module sqlalchemy.sql.expression), 287
type_compiler (sqlalchemy.engine.default.DefaultDialect attribute), 426
type_descriptor() (sqlalchemy.engine.base.Dialect class method), 430
type_descriptor() (sqlalchemy.engine.default.DefaultDialect method), 426
type_engine() (sqlalchemy.types.TypeDecorator method), 395
TypeDecorator (class in sqlalchemy.types), 392
TypeEngine (class in sqlalchemy.types), 400

U
UnboundExecutionError, 423
unchanged (sqlalchemy.orm.attributes.History attribute), 143
undefer() (in module sqlalchemy.orm), 45
undefer_group() (in module sqlalchemy.orm), 46
unformat_identifiers() (sqlalchemy.sql.compiler.IdentifierPreparer method), 432
Unicode (class in sqlalchemy.types), 388
UnicodeText (class in sqlalchemy.types), 388
uninstall_descriptor() (sqlalchemy.orm.interfaces.InstrumentationManager method), 182
uninstall_member() (sqlalchemy.orm.interfaces.InstrumentationManager method), 182
union() (in module sqlalchemy.sql.expression), 288
union() (sqlalchemy.orm.query.Query method), 155
union() (sqlalchemy.sql.expression.Select method), 307
union_all() (in module sqlalchemy.sql.expression), 288
union_all() (sqlalchemy.orm.query.Query method), 155
union_all() (sqlalchemy.sql.expression.Select method), 307
unique_params() (sqlalchemy.sql.expression.ClauseElement method), 291
UniqueConstraint (class in sqlalchemy.schema), 372
UNIQUEIDENTIFIER (class in sqlalchemy.dialects.mssql), 449
unloaded (sqlalchemy.orm.state.InstanceState attribute), 249
UnmappedClassError, 246
UnmappedColumnError, 247
UnmappedError, 247
UnmappedInstanceError, 247
unmodified (sqlalchemy.orm.state.InstanceState attribute), 249
unmodified_intersection() (sqlalchemy.orm.state.InstanceState method), 249
unpickle() (sqlalchemy.orm.events.InstanceEvents method), 179
unregister() (sqlalchemy.orm.instrumentation.ClassManager method), 247
Update (class in sqlalchemy.sql.expression), 309
update() (in module sqlalchemy.sql.expression), 288
update() (sqlalchemy.orm.query.Query method), 155
update() (sqlalchemy.sql.expression.TableClause method), 309
update_execution_options() (sqlalchemy.engine.base.Engine method), 331
UpdateBase (class in sqlalchemy.sql.expression), 309
URL (class in sqlalchemy.engine.url), 317
user (class in sqlalchemy.sql.functions), 312
UserDefinedType (class in sqlalchemy.types), 398
UUID (class in sqlalchemy.dialects.postgresql), 479

V
validate_identifier() (sqlalchemy.engine.default.DefaultDialect method), 426
validates() (in module sqlalchemy.orm), 49
validators (sqlalchemy.orm.mapper.Mapper attribute), 67
value() (sqlalchemy.orm.query.Query method), 156
value_as_iterable() (sqlalchemy.orm.state.InstanceState method), 249
values() (sqlalchemy.orm.query.Query method), 156
values() (sqlalchemy.sql.expression.Insert method), 301
values() (sqlalchemy.sql.expression.Update method), 309
values() (sqlalchemy.sql.expression.ValuesBase method), 310
ValuesBase (class in sqlalchemy.sql.expression), 310
VARBINARY (class in sqlalchemy.dialects.mysql), 464
VARBINARY (class in sqlalchemy.types), 390
VARCHAR (class in sqlalchemy.dialects.drizzle), 440
VARCHAR (class in sqlalchemy.dialects.mssql), 449
VARCHAR (class in sqlalchemy.dialects.mysql), 464
VARCHAR (class in sqlalchemy.types), 390
Variant (class in sqlalchemy.types), 403
versioning (module), 238
vertical (module), 239

W
where() (sqlalchemy.sql.expression.Delete method), 298
where() (sqlalchemy.sql.expression.Select method), 307
where() (sqlalchemy.sql.expression.Update method), 309
whereclause (sqlalchemy.orm.query.Query attribute), 156
with_entities() (sqlalchemy.orm.query.Query method), 156
with_hint() (sqlalchemy.orm.query.Query method), 157
with_hint() (sqlalchemy.sql.expression.Select method), 307
with_labels() (sqlalchemy.ext.sqlsoup.SqlSoup method), 233
with_labels() (sqlalchemy.orm.query.Query method), 157
with_lockmode() (sqlalchemy.orm.query.Query method), 157
with_only_columns() (sqlalchemy.sql.expression.Select method), 307
with_parent() (in module sqlalchemy.orm), 159
with_parent() (sqlalchemy.orm.query.Query method), 157
with_polymorphic() (sqlalchemy.orm.query.Query method), 157
with_session() (sqlalchemy.orm.query.Query method), 158
with_variant() (sqlalchemy.types.TypeDecorator method), 395
with_variant() (sqlalchemy.types.TypeEngine method), 402
with_variant() (sqlalchemy.types.UserDefinedType method), 400
with_variant() (sqlalchemy.types.Variant method), 403
Y
YEAR (class in sqlalchemy.dialects.mysql), 465
yield_per() (sqlalchemy.orm.query.Query method), 158