Microsoft SQL Server

Support for the Microsoft SQL Server database.

DBAPI Support

The following dialect/DBAPI options are available; please refer to the individual DBAPI sections below for connect information.

  • PyODBC

  • mxODBC

  • pymssql

  • zxjdbc

  • adodbapi

Auto Increment Behavior

SQL Server provides so-called “auto incrementing” behavior using the IDENTITY construct, which can be placed on an integer primary key. SQLAlchemy considers IDENTITY within its default “autoincrement” behavior, described at Column.autoincrement; this means that by default, the first integer primary key column in a Table will be considered to be the identity column and will generate DDL as such:

from sqlalchemy import Table, MetaData, Column, Integer

m = MetaData()
t = Table('t', m,
        Column('id', Integer, primary_key=True),
        Column('x', Integer))
m.create_all(engine)

The above example will generate DDL as:

CREATE TABLE t (
    id INTEGER NOT NULL IDENTITY(1,1),
    x INTEGER NULL,
    PRIMARY KEY (id)
)

For the case where this default generation of IDENTITY is not desired, specify autoincrement=False on all integer primary key columns:

m = MetaData()
t = Table('t', m,
        Column('id', Integer, primary_key=True, autoincrement=False),
        Column('x', Integer))
m.create_all(engine)

Note

An INSERT statement which refers to an explicit value for such a column is prohibited by SQL Server; however, SQLAlchemy will detect this and modify the IDENTITY_INSERT flag accordingly at statement execution time. As this is not a high-performing process, care should be taken to set the autoincrement flag appropriately for columns that will not actually require IDENTITY behavior.

Controlling “Start” and “Increment”

Specific control over the parameters of the IDENTITY value is supported using the Sequence object. While this object normally represents an explicit “sequence” for supporting backends, on SQL Server it is re-purposed to specify behavior regarding the identity column, including support of the “start” and “increment” values:

from sqlalchemy import Table, Integer, String, Sequence, Column

Table('test', metadata,
       Column('id', Integer,
              Sequence('blah', start=100, increment=10),
              primary_key=True),
       Column('name', String(20))
     ).create(some_engine)

would yield:

CREATE TABLE test (
  id INTEGER NOT NULL IDENTITY(100,10) PRIMARY KEY,
  name VARCHAR(20) NULL
)

Note that the start and increment values for sequences are optional and will default to 1 and 1, respectively.

INSERT behavior

Handling of the IDENTITY column at INSERT time involves two key techniques. The most common is being able to fetch the “last inserted value” for a given IDENTITY column, a process which SQLAlchemy performs implicitly in many cases, most importantly within the ORM.

The process for fetching this value has several variants:

  • In the vast majority of cases, RETURNING is used in conjunction with INSERT statements on SQL Server in order to get newly generated primary key values:

    INSERT INTO t (x) OUTPUT inserted.id VALUES (?)
  • When RETURNING is not available or has been disabled via implicit_returning=False, either the scope_identity() function or the @@identity variable is used; behavior varies by backend:

    • when using PyODBC, the phrase ; select scope_identity() will be appended to the end of the INSERT statement; a second result set will be fetched in order to receive the value. Given a table as:

      t = Table('t', m, Column('id', Integer, primary_key=True),
              Column('x', Integer),
              implicit_returning=False)

      an INSERT will look like:

      INSERT INTO t (x) VALUES (?); select scope_identity()
    • Other dialects such as pymssql will call upon SELECT scope_identity() AS lastrowid subsequent to an INSERT statement. If the flag use_scope_identity=False is passed to create_engine(), the statement SELECT @@identity AS lastrowid is used instead.
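For example, an engine configured to fall back to the @@identity variable might be constructed as in the following sketch; the URL and names are illustrative:

from sqlalchemy import create_engine

# use_scope_identity=False causes SELECT @@identity AS lastrowid
# to be used when fetching newly generated IDENTITY values
engine = create_engine(
    "mssql+pymssql://scott:tiger@some_host/some_db",
    use_scope_identity=False
)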

A table that contains an IDENTITY column will prohibit an INSERT statement that refers to the identity column explicitly. The SQLAlchemy dialect will detect when an INSERT construct, created using a core insert() construct (not a plain string SQL), refers to the identity column, and in this case will emit SET IDENTITY_INSERT ON prior to the insert statement proceeding, and SET IDENTITY_INSERT OFF subsequent to the execution. Given this example:

m = MetaData()
t = Table('t', m, Column('id', Integer, primary_key=True),
                Column('x', Integer))
m.create_all(engine)

engine.execute(t.insert(), {'id': 1, 'x': 1}, {'id': 2, 'x': 2})

The above column will be created with IDENTITY, however the INSERT statement we emit is specifying explicit values. In the echo output we can see how SQLAlchemy handles this:

CREATE TABLE t (
    id INTEGER NOT NULL IDENTITY(1,1),
    x INTEGER NULL,
    PRIMARY KEY (id)
)

COMMIT
SET IDENTITY_INSERT t ON
INSERT INTO t (id, x) VALUES (?, ?)
((1, 1), (2, 2))
SET IDENTITY_INSERT t OFF
COMMIT

This is an auxiliary use case suitable for testing and bulk insert scenarios.

MAX on VARCHAR / NVARCHAR

SQL Server supports the special string “MAX” within the VARCHAR and NVARCHAR datatypes, to indicate “maximum length possible”. The dialect currently handles this as a length of “None” in the base type, rather than supplying a dialect-specific version of these types, so that a base type specified such as VARCHAR(None) can assume “unlengthed” behavior on more than one backend without using dialect-specific types.

To build a SQL Server VARCHAR or NVARCHAR with MAX length, use None:

my_table = Table(
    'my_table', metadata,
    Column('my_data', VARCHAR(None)),
    Column('my_n_data', NVARCHAR(None))
)

Collation Support

Character collations are supported by the base string types, specified by the string argument “collation”:

from sqlalchemy import VARCHAR
Column('login', VARCHAR(32, collation='Latin1_General_CI_AS'))

When such a column is associated with a Table, the CREATE TABLE statement for this column will yield:

login VARCHAR(32) COLLATE Latin1_General_CI_AS NULL

LIMIT/OFFSET Support

MSSQL has no support for the LIMIT or OFFSET keywords; LIMIT is instead rendered using the TOP Transact-SQL keyword:

select.limit

will yield:

SELECT TOP n

If using SQL Server 2005 or above, LIMIT with OFFSET support is available through the ROW_NUMBER OVER construct. For versions below 2005, LIMIT with OFFSET usage will fail.
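As a sketch, assuming a Table t with an integer column x as in the earlier examples:

from sqlalchemy import select

# renders: SELECT TOP 10 t.x FROM t
stmt = select([t.c.x]).limit(10)

# on SQL Server 2005 and above, renders using ROW_NUMBER() OVER;
# the dialect requires an ORDER BY when OFFSET is present
stmt = select([t.c.x]).order_by(t.c.x).limit(10).offset(5)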

Transaction Isolation Level

All SQL Server dialects support setting of transaction isolation level both via a dialect-specific parameter create_engine.isolation_level accepted by create_engine(), as well as the Connection.execution_options.isolation_level argument as passed to Connection.execution_options(). This feature works by issuing the command SET TRANSACTION ISOLATION LEVEL <level> for each new connection.

To set isolation level using create_engine():

engine = create_engine(
    "mssql+pyodbc://scott:tiger@ms_2008",
    isolation_level="REPEATABLE READ"
)

To set using per-connection execution options:

connection = engine.connect()
connection = connection.execution_options(
    isolation_level="READ COMMITTED"
)

Valid values for isolation_level include:

  • AUTOCOMMIT - pyodbc / pymssql-specific

  • READ COMMITTED

  • READ UNCOMMITTED

  • REPEATABLE READ

  • SERIALIZABLE

  • SNAPSHOT - specific to SQL Server

New in version 1.1: support for isolation level setting on Microsoft SQL Server.

New in version 1.2: added AUTOCOMMIT isolation level setting

Nullability

MSSQL has support for three levels of column nullability. The default nullability allows nulls and is explicit in the CREATE TABLE construct:

name VARCHAR(20) NULL

If nullable=None is specified then no specification is made. In other words the database’s configured default is used. This will render:

name VARCHAR(20)

If nullable is True or False then the column will be NULL or NOT NULL respectively.
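A sketch showing all three settings side by side; the table name is illustrative:

from sqlalchemy import Table, MetaData, Column, String

m = MetaData()
t = Table('nullability_demo', m,
        Column('a', String(20), nullable=True),   # a VARCHAR(20) NULL
        Column('b', String(20), nullable=False),  # b VARCHAR(20) NOT NULL
        Column('c', String(20), nullable=None))   # c VARCHAR(20)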

Date / Time Handling

DATE and TIME are supported. Bind parameters are converted to datetime.datetime() objects as required by most MSSQL drivers, and results are processed from strings if needed. The DATE and TIME types are not available for MSSQL 2005 and previous - if a server version below 2008 is detected, DDL for these types will be issued as DATETIME.
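For example, a sketch of a table using these types; on a server older than 2008, the d and t columns below would instead be emitted as DATETIME:

from sqlalchemy import Table, MetaData, Column, Integer, Date, Time

m = MetaData()
t = Table('date_demo', m,
        Column('id', Integer, primary_key=True),
        Column('d', Date()),   # DATE on SQL Server 2008 and above
        Column('t', Time()))   # TIME on SQL Server 2008 and above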

Large Text/Binary Type Deprecation

Per SQL Server 2012/2014 Documentation, the NTEXT, TEXT and IMAGE datatypes are to be removed from SQL Server in a future release. SQLAlchemy normally relates these types to the UnicodeText, Text and LargeBinary datatypes.

In order to accommodate this change, a new flag deprecate_large_types is added to the dialect, which will be automatically set based on detection of the server version in use, if not otherwise set by the user. The behavior of this flag is as follows:

  • When this flag is True, the UnicodeText, Text and LargeBinary datatypes, when used to render DDL, will render the types NVARCHAR(max), VARCHAR(max), and VARBINARY(max), respectively. This is a new behavior as of the addition of this flag.

  • When this flag is False, the UnicodeText, Text and LargeBinary datatypes, when used to render DDL, will render the types NTEXT, TEXT, and IMAGE, respectively. This is the long-standing behavior of these types.

  • The flag begins with the value None, before a database connection is established. If the dialect is used to render DDL without the flag being set, it is interpreted the same as False.

  • On first connection, the dialect detects if SQL Server version 2012 or greater is in use; if the flag is still at None, it sets it to True or False based on whether 2012 or greater is detected.

  • The flag can be set to either True or False when the dialect is created, typically via create_engine():

    eng = create_engine("mssql+pymssql://user:pass@host/db",
                    deprecate_large_types=True)
  • Complete control over whether the “old” or “new” types are rendered is available in all SQLAlchemy versions by using the UPPERCASE type objects instead: NVARCHAR, VARCHAR, VARBINARY, TEXT, NTEXT, IMAGE will always remain fixed and always output exactly that type; see the sketch below.

New in version 1.0.0.
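For example, a sketch that pins the rendered DDL regardless of the deprecate_large_types setting, by using the UPPERCASE types directly; the table name is illustrative:

from sqlalchemy import Table, MetaData, Column
from sqlalchemy.dialects.mssql import NVARCHAR, NTEXT

m = MetaData()
t = Table('pinned_types', m,
        Column('new_style', NVARCHAR(None)),  # always renders NVARCHAR(max)
        Column('old_style', NTEXT()))         # always renders NTEXT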

Multipart Schema Names

SQL Server schemas sometimes require multiple parts to their “schema” qualifier, that is, including the database name and owner name as separate tokens, such as mydatabase.dbo.some_table. These multipart names can be set at once using the Table.schema argument of Table:

Table(
    "some_table", metadata,
    Column("q", String(50)),
    schema="mydatabase.dbo"
)

When performing operations such as table or component reflection, a schema argument that contains a dot will be split into separate “database” and “owner” components in order to correctly query the SQL Server information schema tables, as these two values are stored separately. Additionally, when rendering the schema name for DDL or SQL, the two components will be quoted separately for case sensitive names and other special characters. Given an argument as below:

Table(
    "some_table", metadata,
    Column("q", String(50)),
    schema="MyDataBase.dbo"
)

The above schema would be rendered as [MyDataBase].dbo, and also in reflection, would be reflected using “dbo” as the owner and “MyDataBase” as the database name.
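For example, a sketch of reflecting such a table, using the 1.x autoload arguments; the names and the engine object are illustrative:

from sqlalchemy import MetaData, Table

m = MetaData()
t = Table("some_table", m, autoload=True, autoload_with=engine,
          schema="MyDataBase.dbo")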

To control how the schema name is broken into database / owner, specify brackets (which in SQL Server are quoting characters) in the name. Below, the “owner” will be considered as MyDataBase.dbo and the “database” will be None:

Table(
    "some_table", metadata,
    Column("q", String(50)),
    schema="[MyDataBase.dbo]"
)

To individually specify both database and owner name with special characters or embedded dots, use two sets of brackets:

Table(
    "some_table", metadata,
    Column("q", String(50)),
    schema="[MyDataBase.Period].[MyOwner.Dot]"
)

Changed in version 1.2: the SQL Server dialect now treats brackets as identifier delimiters splitting the schema into separate database and owner tokens, to allow dots within either name itself.

Legacy Schema Mode

Very old versions of the MSSQL dialect introduced the behavior such that a schema-qualified table would be auto-aliased when used in a SELECT statement; given a table:

account_table = Table(
    'account', metadata,
    Column('id', Integer, primary_key=True),
    Column('info', String(100)),
    schema="customer_schema"
)

this legacy mode of rendering would assume that “customer_schema.account” would not be accepted by all parts of the SQL statement, as illustrated below:

>>> eng = create_engine("mssql+pymssql://mydsn", legacy_schema_aliasing=True)
>>> print(account_table.select().compile(eng))
SELECT account_1.id, account_1.info
FROM customer_schema.account AS account_1

This mode of behavior is now off by default, as it appears to have served no purpose; however in the case that legacy applications rely upon it, it is available using the legacy_schema_aliasing argument to create_engine() as illustrated above.

Changed in version 1.1: the legacy_schema_aliasing flag introduced in version 1.0.5 to allow disabling of legacy mode for schemas now defaults to False.

Clustered Index Support

The MSSQL dialect supports clustered indexes (and primary keys) via the mssql_clustered option. This option is available to Index, UniqueConstraint, and PrimaryKeyConstraint.

To generate a clustered index:

Index("my_index", table.c.x, mssql_clustered=True)

which renders the index as CREATE CLUSTERED INDEX my_index ON table (x).

To generate a clustered primary key use:

Table('my_table', metadata,
      Column('x', ...),
      Column('y', ...),
      PrimaryKeyConstraint("x", "y", mssql_clustered=True))

which will render the table, for example, as:

CREATE TABLE my_table (x INTEGER NOT NULL, y INTEGER NOT NULL,
                       PRIMARY KEY CLUSTERED (x, y))

Similarly, we can generate a clustered unique constraint using:

Table('my_table', metadata,
      Column('x', ...),
      Column('y', ...),
      PrimaryKeyConstraint("x"),
      UniqueConstraint("y", mssql_clustered=True),
      )

To explicitly request a non-clustered primary key (for example, when a separate clustered index is desired), use:

Table('my_table', metadata,
      Column('x', ...),
      Column('y', ...),
      PrimaryKeyConstraint("x", "y", mssql_clustered=False))

which will render the table, for example, as:

CREATE TABLE my_table (x INTEGER NOT NULL, y INTEGER NOT NULL,
                       PRIMARY KEY NONCLUSTERED (x, y))

Changed in version 1.1: the mssql_clustered option now defaults to None, rather than False. mssql_clustered=False now explicitly renders the NONCLUSTERED clause, whereas None omits the CLUSTERED clause entirely, allowing SQL Server defaults to take effect.

MSSQL-Specific Index Options

In addition to clustering, the MSSQL dialect supports other special options for Index.

INCLUDE

The mssql_include option renders INCLUDE(colname) for the given string names:

Index("my_index", table.c.x, mssql_include=['y'])

would render the index as CREATE INDEX my_index ON table (x) INCLUDE (y).

Index ordering

Index ordering is available via functional expressions, such as:

Index("my_index", table.c.x.desc())

would render the index as CREATE INDEX my_index ON table (x DESC).

Compatibility Levels

MSSQL supports the notion of setting compatibility levels at the database level. This allows, for instance, a database that is compatible with SQL2000 to run on a SQL2005 database server. server_version_info will always return the database server version information (in this case SQL2005) and not the compatibility level information. Because of this, if running under a backwards compatibility mode, SQLAlchemy may attempt to use T-SQL statements that cannot be parsed by the database server.
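A sketch of checking what the dialect has detected once a connection has been made; the URL is illustrative:

engine = create_engine("mssql+pyodbc://scott:tiger@some_dsn")
with engine.connect() as conn:
    # the actual server version, e.g. (10, 50, 1600, 1),
    # not the configured compatibility level
    print(conn.dialect.server_version_info)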

Triggers

SQLAlchemy by default uses OUTPUT INSERTED to get at newly generated primary key values via IDENTITY columns or other server side defaults. MS-SQL does not allow the usage of OUTPUT INSERTED on tables that have triggers. To disable the usage of OUTPUT INSERTED on a per-table basis, specify implicit_returning=False for each Table which has triggers:

Table('mytable', metadata,
    Column('id', Integer, primary_key=True),
    # ...,
    implicit_returning=False
)

Declarative form:

class MyClass(Base):
    # ...
    __table_args__ = {'implicit_returning':False}

This option can also be specified engine-wide using the implicit_returning=False argument on create_engine().
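A sketch of the engine-wide form; the URL is illustrative:

engine = create_engine(
    "mssql+pyodbc://scott:tiger@some_dsn",
    implicit_returning=False
)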

Rowcount Support / ORM Versioning

The SQL Server drivers may have limited ability to return the number of rows updated from an UPDATE or DELETE statement.

As of this writing, the PyODBC driver is not able to return a rowcount when OUTPUT INSERTED is used. This impacts the SQLAlchemy ORM’s versioning feature when server-side value generators are in use: while the versioning operations can succeed, the ORM cannot always check that an UPDATE or DELETE statement matched the number of rows expected, which is how it verifies that the version identifier matched. When this condition occurs, a warning will be emitted but the operation will proceed.

The use of OUTPUT INSERTED can be disabled by setting the Table.implicit_returning flag to False on a particular Table, which in declarative looks like:

class MyTable(Base):
    __tablename__ = 'mytable'
    id = Column(Integer, primary_key=True)
    stuff = Column(String(10))
    timestamp = Column(TIMESTAMP(), default=text('DEFAULT'))
    __mapper_args__ = {
        'version_id_col': timestamp,
        'version_id_generator': False,
    }
    __table_args__ = {
        'implicit_returning': False
    }

Enabling Snapshot Isolation

SQL Server has a default transaction isolation mode that locks entire tables, and causes even mildly concurrent applications to have long held locks and frequent deadlocks. Enabling snapshot isolation for the database as a whole is recommended for modern levels of concurrency support. This is accomplished via the following ALTER DATABASE commands executed at the SQL prompt:

ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON

ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON
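These commands can also be issued from SQLAlchemy using an autocommit-level connection, as sketched below; the URL and database name are illustrative, and note that changing READ_COMMITTED_SNAPSHOT typically requires that no other connections are using the database:

engine = create_engine("mssql+pyodbc://scott:tiger@some_dsn")

conn = engine.connect().execution_options(isolation_level="AUTOCOMMIT")
conn.execute("ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON")
conn.execute("ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON")
conn.close()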

Background on SQL Server snapshot isolation is available at http://msdn.microsoft.com/en-us/library/ms175095.aspx.

SQL Server Data Types

As with all SQLAlchemy dialects, all UPPERCASE types that are known to be valid with SQL Server are importable from the top level dialect, whether they originate from sqlalchemy.types or from the local dialect:

from sqlalchemy.dialects.mssql import \
    BIGINT, BINARY, BIT, CHAR, DATE, DATETIME, DATETIME2, \
    DATETIMEOFFSET, DECIMAL, FLOAT, IMAGE, INTEGER, MONEY, \
    NCHAR, NTEXT, NUMERIC, NVARCHAR, REAL, SMALLDATETIME, \
    SMALLINT, SMALLMONEY, SQL_VARIANT, TEXT, TIME, \
    TIMESTAMP, TINYINT, UNIQUEIDENTIFIER, VARBINARY, VARCHAR

Types which are specific to SQL Server, or have SQL Server-specific construction arguments, are as follows:

Object Name         Description

BIT
CHAR                The SQL CHAR type.
DATETIME2
DATETIMEOFFSET
IMAGE
MONEY
NCHAR               The SQL NCHAR type.
NTEXT               MSSQL NTEXT type, for variable-length unicode text up
                    to 2^30 characters.
NVARCHAR            The SQL NVARCHAR type.
REAL
ROWVERSION          Implement the SQL Server ROWVERSION type.
SMALLDATETIME
SMALLMONEY
SQL_VARIANT
TEXT                The SQL TEXT type.
TIME
TIMESTAMP           Implement the SQL Server TIMESTAMP type.
TINYINT
UNIQUEIDENTIFIER
VARCHAR             The SQL VARCHAR type.
XML                 MSSQL XML type.

class sqlalchemy.dialects.mssql.BIT
class sqlalchemy.dialects.mssql.CHAR(length=None, collation=None, convert_unicode=False, unicode_error=None, _warn_on_bytestring=False)

The SQL CHAR type.

Members

__init__()

method sqlalchemy.dialects.mssql.CHAR.__init__(length=None, collation=None, convert_unicode=False, unicode_error=None, _warn_on_bytestring=False)

inherited from the sqlalchemy.types.String.__init__ method of String

Create a string-holding type.

Parameters:
  • length – optional, a length for the column for use in DDL and CAST expressions. May be safely omitted if no CREATE TABLE will be issued. Certain databases may require a length for use in DDL, and will raise an exception when the CREATE TABLE DDL is issued if a VARCHAR with no length is included. Whether the value is interpreted as bytes or characters is database specific.

  • collation

    Optional, a column-level collation for use in DDL and CAST expressions. Renders using the COLLATE keyword supported by SQLite, MySQL, and PostgreSQL. E.g.:

    >>> from sqlalchemy import cast, select, String
    >>> print(select([cast('some string', String(collation='utf8'))]))
    SELECT CAST(:param_1 AS VARCHAR COLLATE utf8) AS anon_1

  • convert_unicode

    When set to True, the String type will assume that input is to be passed as Python Unicode objects under Python 2, and results returned as Python Unicode objects. In the rare circumstance that the DBAPI does not support Python unicode under Python 2, SQLAlchemy will use its own encoder/decoder functionality on strings, referring to the value of the create_engine.encoding parameter passed to create_engine() as the encoding.

    For the extremely rare case that Python Unicode is to be encoded/decoded by SQLAlchemy on a backend that does natively support Python Unicode, the string value "force" can be passed here which will cause SQLAlchemy’s encode/decode services to be used unconditionally.

    Note

    SQLAlchemy’s unicode-conversion flags and features only apply to Python 2; in Python 3, all string objects are Unicode objects. For this reason, as well as the fact that virtually all modern DBAPIs now support Unicode natively even under Python 2, the String.convert_unicode flag is inherently a legacy feature.

    Note

    In the vast majority of cases, the Unicode or UnicodeText datatypes should be used for a Column that expects to store non-ascii data. These datatypes will ensure that the correct types are used on the database side as well as set up the correct Unicode behaviors under Python 2.

    See also

    create_engine.convert_unicode - Engine-wide parameter

  • unicode_error – Optional, a method to use to handle Unicode conversion errors. Behaves like the errors keyword argument to the standard library’s string.decode() functions. This flag requires that String.convert_unicode is set to "force" - otherwise, SQLAlchemy is not guaranteed to handle the task of unicode conversion. Note that this flag adds significant performance overhead to row-fetching operations for backends that already return unicode objects natively (which most DBAPIs do). This flag should only be used as a last resort for reading strings from a column with varied or corrupted encodings.

class sqlalchemy.dialects.mssql.DATETIME2(precision=None, **kw)

Class signature

class sqlalchemy.dialects.mssql.DATETIME2 (sqlalchemy.dialects.mssql.base._DateTimeBase, sqlalchemy.types.DateTime)

class sqlalchemy.dialects.mssql.DATETIMEOFFSET(precision=None, **kwargs)
class sqlalchemy.dialects.mssql.IMAGE(length=None)

Members

__init__()

method sqlalchemy.dialects.mssql.IMAGE.__init__(length=None)

inherited from the sqlalchemy.types.LargeBinary.__init__ method of LargeBinary

Construct a LargeBinary type.

Parameters:

length – optional, a length for the column for use in DDL statements, for those binary types that accept a length, such as the MySQL BLOB type.

class sqlalchemy.dialects.mssql.MONEY
class sqlalchemy.dialects.mssql.NCHAR(length=None, **kwargs)

The SQL NCHAR type.

Members

__init__()

method sqlalchemy.dialects.mssql.NCHAR.__init__(length=None, **kwargs)

inherited from the sqlalchemy.types.Unicode.__init__ method of Unicode

Create a Unicode object.

Parameters are the same as that of String, with the exception that convert_unicode defaults to True.

class sqlalchemy.dialects.mssql.NTEXT(length=None, **kwargs)

MSSQL NTEXT type, for variable-length unicode text up to 2^30 characters.

Members

__init__()

method sqlalchemy.dialects.mssql.NTEXT.__init__(length=None, **kwargs)

inherited from the sqlalchemy.types.UnicodeText.__init__ method of UnicodeText

Create a Unicode-converting Text type.

Parameters are the same as that of Text, with the exception that convert_unicode defaults to True.

class sqlalchemy.dialects.mssql.NVARCHAR(length=None, **kwargs)

The SQL NVARCHAR type.

Members

__init__()

method sqlalchemy.dialects.mssql.NVARCHAR.__init__(length=None, **kwargs)

inherited from the sqlalchemy.types.Unicode.__init__ method of Unicode

Create a Unicode object.

Parameters are the same as that of String, with the exception that convert_unicode defaults to True.

class sqlalchemy.dialects.mssql.REAL(**kw)
class sqlalchemy.dialects.mssql.ROWVERSION(convert_int=False)

Implement the SQL Server ROWVERSION type.

The ROWVERSION datatype is a SQL Server synonym for the TIMESTAMP datatype, however current SQL Server documentation suggests using ROWVERSION for new datatypes going forward.

The ROWVERSION datatype does not reflect (e.g. introspect) from the database as itself; the returned datatype will be TIMESTAMP.

This is a read-only datatype that does not support INSERT of values.

New in version 1.2.

See also

TIMESTAMP

Members

__init__()

method sqlalchemy.dialects.mssql.ROWVERSION.__init__(convert_int=False)

inherited from the sqlalchemy.dialects.mssql.base.TIMESTAMP.__init__ method of TIMESTAMP

Construct a TIMESTAMP or ROWVERSION type.

Parameters:

convert_int – if True, binary integer values will be converted to integers on read.

New in version 1.2.

class sqlalchemy.dialects.mssql.SMALLDATETIME(timezone=False)

Members

__init__()

Class signature

class sqlalchemy.dialects.mssql.SMALLDATETIME (sqlalchemy.dialects.mssql.base._DateTimeBase, sqlalchemy.types.DateTime)

method sqlalchemy.dialects.mssql.SMALLDATETIME.__init__(timezone=False)

inherited from the sqlalchemy.types.DateTime.__init__ method of DateTime

Construct a new DateTime.

Parameters:

timezone – boolean. Indicates that the datetime type should enable timezone support, if available on the base date/time-holding type only. It is recommended to make use of the TIMESTAMP datatype directly when using this flag, as some databases include separate generic date/time-holding types distinct from the timezone-capable TIMESTAMP datatype, such as Oracle.

class sqlalchemy.dialects.mssql.SMALLMONEY
class sqlalchemy.dialects.mssql.SQL_VARIANT
class sqlalchemy.dialects.mssql.TEXT(length=None, collation=None, convert_unicode=False, unicode_error=None, _warn_on_bytestring=False)

The SQL TEXT type.

Members

__init__()

method sqlalchemy.dialects.mssql.TEXT.__init__(length=None, collation=None, convert_unicode=False, unicode_error=None, _warn_on_bytestring=False)

inherited from the sqlalchemy.types.String.__init__ method of String

Create a string-holding type.

Parameters:
  • length – optional, a length for the column for use in DDL and CAST expressions. May be safely omitted if no CREATE TABLE will be issued. Certain databases may require a length for use in DDL, and will raise an exception when the CREATE TABLE DDL is issued if a VARCHAR with no length is included. Whether the value is interpreted as bytes or characters is database specific.

  • collation

    Optional, a column-level collation for use in DDL and CAST expressions. Renders using the COLLATE keyword supported by SQLite, MySQL, and PostgreSQL. E.g.:

    >>> from sqlalchemy import cast, select, String
    >>> print(select([cast('some string', String(collation='utf8'))]))
    SELECT CAST(:param_1 AS VARCHAR COLLATE utf8) AS anon_1

  • convert_unicode

    When set to True, the String type will assume that input is to be passed as Python Unicode objects under Python 2, and results returned as Python Unicode objects. In the rare circumstance that the DBAPI does not support Python unicode under Python 2, SQLAlchemy will use its own encoder/decoder functionality on strings, referring to the value of the create_engine.encoding parameter passed to create_engine() as the encoding.

    For the extremely rare case that Python Unicode is to be encoded/decoded by SQLAlchemy on a backend that does natively support Python Unicode, the string value "force" can be passed here which will cause SQLAlchemy’s encode/decode services to be used unconditionally.

    Note

    SQLAlchemy’s unicode-conversion flags and features only apply to Python 2; in Python 3, all string objects are Unicode objects. For this reason, as well as the fact that virtually all modern DBAPIs now support Unicode natively even under Python 2, the String.convert_unicode flag is inherently a legacy feature.

    Note

    In the vast majority of cases, the Unicode or UnicodeText datatypes should be used for a Column that expects to store non-ascii data. These datatypes will ensure that the correct types are used on the database side as well as set up the correct Unicode behaviors under Python 2.

    See also

    create_engine.convert_unicode - Engine-wide parameter

  • unicode_error – Optional, a method to use to handle Unicode conversion errors. Behaves like the errors keyword argument to the standard library’s string.decode() functions. This flag requires that String.convert_unicode is set to "force" - otherwise, SQLAlchemy is not guaranteed to handle the task of unicode conversion. Note that this flag adds significant performance overhead to row-fetching operations for backends that already return unicode objects natively (which most DBAPIs do). This flag should only be used as a last resort for reading strings from a column with varied or corrupted encodings.

class sqlalchemy.dialects.mssql.TIME(precision=None, **kwargs)
class sqlalchemy.dialects.mssql.TIMESTAMP(convert_int=False)

Implement the SQL Server TIMESTAMP type.

Note this is completely different than the SQL Standard TIMESTAMP type, which is not supported by SQL Server. It is a read-only datatype that does not support INSERT of values.

New in version 1.2.

See also

ROWVERSION

Members

__init__()

Class signature

class sqlalchemy.dialects.mssql.TIMESTAMP (sqlalchemy.types._Binary)

method sqlalchemy.dialects.mssql.TIMESTAMP.__init__(convert_int=False)

Construct a TIMESTAMP or ROWVERSION type.

Parameters:

convert_int – if True, binary integer values will be converted to integers on read.

New in version 1.2.

class sqlalchemy.dialects.mssql.TINYINT
class sqlalchemy.dialects.mssql.UNIQUEIDENTIFIER
class sqlalchemy.dialects.mssql.VARCHAR(length=None, collation=None, convert_unicode=False, unicode_error=None, _warn_on_bytestring=False)

The SQL VARCHAR type.

Members

__init__()

method sqlalchemy.dialects.mssql.VARCHAR.__init__(length=None, collation=None, convert_unicode=False, unicode_error=None, _warn_on_bytestring=False)

inherited from the sqlalchemy.types.String.__init__ method of String

Create a string-holding type.

Parameters:
  • length – optional, a length for the column for use in DDL and CAST expressions. May be safely omitted if no CREATE TABLE will be issued. Certain databases may require a length for use in DDL, and will raise an exception when the CREATE TABLE DDL is issued if a VARCHAR with no length is included. Whether the value is interpreted as bytes or characters is database specific.

  • collation

    Optional, a column-level collation for use in DDL and CAST expressions. Renders using the COLLATE keyword supported by SQLite, MySQL, and PostgreSQL. E.g.:

    >>> from sqlalchemy import cast, select, String
    >>> print(select([cast('some string', String(collation='utf8'))]))
    SELECT CAST(:param_1 AS VARCHAR COLLATE utf8) AS anon_1

  • convert_unicode

    When set to True, the String type will assume that input is to be passed as Python Unicode objects under Python 2, and results returned as Python Unicode objects. In the rare circumstance that the DBAPI does not support Python unicode under Python 2, SQLAlchemy will use its own encoder/decoder functionality on strings, referring to the value of the create_engine.encoding parameter passed to create_engine() as the encoding.

    For the extremely rare case that Python Unicode is to be encoded/decoded by SQLAlchemy on a backend that does natively support Python Unicode, the string value "force" can be passed here which will cause SQLAlchemy’s encode/decode services to be used unconditionally.

    Note

    SQLAlchemy’s unicode-conversion flags and features only apply to Python 2; in Python 3, all string objects are Unicode objects. For this reason, as well as the fact that virtually all modern DBAPIs now support Unicode natively even under Python 2, the String.convert_unicode flag is inherently a legacy feature.

    Note

    In the vast majority of cases, the Unicode or UnicodeText datatypes should be used for a Column that expects to store non-ascii data. These datatypes will ensure that the correct types are used on the database side as well as set up the correct Unicode behaviors under Python 2.

    See also

    create_engine.convert_unicode - Engine-wide parameter

  • unicode_error – Optional, a method to use to handle Unicode conversion errors. Behaves like the errors keyword argument to the standard library’s string.decode() functions. This flag requires that String.convert_unicode is set to "force" - otherwise, SQLAlchemy is not guaranteed to handle the task of unicode conversion. Note that this flag adds significant performance overhead to row-fetching operations for backends that already return unicode objects natively (which most DBAPIs do). This flag should only be used as a last resort for reading strings from a column with varied or corrupted encodings.

class sqlalchemy.dialects.mssql.XML(length=None, collation=None, convert_unicode=False, unicode_error=None, _warn_on_bytestring=False)

MSSQL XML type.

This is a placeholder type for reflection purposes that does not include any Python-side datatype support. It also does not currently support additional arguments, such as “CONTENT”, “DOCUMENT”, “xml_schema_collection”.

New in version 1.1.11.

Members

__init__()

method sqlalchemy.dialects.mssql.XML.__init__(length=None, collation=None, convert_unicode=False, unicode_error=None, _warn_on_bytestring=False)

inherited from the sqlalchemy.types.String.__init__ method of String

Create a string-holding type.

Parameters:
  • length – optional, a length for the column for use in DDL and CAST expressions. May be safely omitted if no CREATE TABLE will be issued. Certain databases may require a length for use in DDL, and will raise an exception when the CREATE TABLE DDL is issued if a VARCHAR with no length is included. Whether the value is interpreted as bytes or characters is database specific.

  • collation

    Optional, a column-level collation for use in DDL and CAST expressions. Renders using the COLLATE keyword supported by SQLite, MySQL, and PostgreSQL. E.g.:

    >>> from sqlalchemy import cast, select, String
    >>> print(select([cast('some string', String(collation='utf8'))]))
    SELECT CAST(:param_1 AS VARCHAR COLLATE utf8) AS anon_1

  • convert_unicode

    When set to True, the String type will assume that input is to be passed as Python Unicode objects under Python 2, and results returned as Python Unicode objects. In the rare circumstance that the DBAPI does not support Python unicode under Python 2, SQLAlchemy will use its own encoder/decoder functionality on strings, referring to the value of the create_engine.encoding parameter passed to create_engine() as the encoding.

    For the extremely rare case that Python Unicode is to be encoded/decoded by SQLAlchemy on a backend that does natively support Python Unicode, the string value "force" can be passed here which will cause SQLAlchemy’s encode/decode services to be used unconditionally.

    Note

    SQLAlchemy’s unicode-conversion flags and features only apply to Python 2; in Python 3, all string objects are Unicode objects. For this reason, as well as the fact that virtually all modern DBAPIs now support Unicode natively even under Python 2, the String.convert_unicode flag is inherently a legacy feature.

    Note

    In the vast majority of cases, the Unicode or UnicodeText datatypes should be used for a Column that expects to store non-ascii data. These datatypes will ensure that the correct types are used on the database side as well as set up the correct Unicode behaviors under Python 2.

    See also

    create_engine.convert_unicode - Engine-wide parameter

  • unicode_error – Optional, a method to use to handle Unicode conversion errors. Behaves like the errors keyword argument to the standard library’s string.decode() functions. This flag requires that String.convert_unicode is set to "force" - otherwise, SQLAlchemy is not guaranteed to handle the task of unicode conversion. Note that this flag adds significant performance overhead to row-fetching operations for backends that already return unicode objects natively (which most DBAPIs do). This flag should only be used as a last resort for reading strings from a column with varied or corrupted encodings.

PyODBC

Support for the Microsoft SQL Server database via the PyODBC driver.

DBAPI

Documentation and download information (if applicable) for PyODBC is available at: http://pypi.python.org/pypi/pyodbc/

Connecting

Connect String:

mssql+pyodbc://<username>:<password>@<dsnname>

Connecting to PyODBC

The URL here is to be translated to PyODBC connection strings, as detailed in ConnectionStrings.

DSN Connections

A DSN-based connection is preferred overall when using ODBC. A basic DSN-based connection looks like:

engine = create_engine("mssql+pyodbc://scott:tiger@some_dsn")

The above URL will pass the following connection string to PyODBC:

dsn=some_dsn;UID=scott;PWD=tiger

If the username and password are omitted, the DSN form will also add the Trusted_Connection=yes directive to the ODBC string.
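For example, a sketch of an integrated-authentication connection; the DSN name is illustrative:

# no username or password present; the resulting ODBC string is
# dsn=some_dsn;Trusted_Connection=yes
engine = create_engine("mssql+pyodbc://some_dsn")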

Hostname Connections

Hostname-based connections are not preferred, however are supported. The ODBC driver name must be explicitly specified:

engine = create_engine("mssql+pyodbc://scott:tiger@myhost:port/databasename?driver=SQL+Server+Native+Client+10.0")

Changed in version 1.0.0: Hostname-based PyODBC connections now require the SQL Server driver name specified explicitly. SQLAlchemy cannot choose an optimal default here as it varies based on platform and installed drivers.

Other keywords interpreted by the Pyodbc dialect to be passed to pyodbc.connect() in both the DSN and hostname cases include: odbc_autotranslate, ansi, unicode_results, autocommit.
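A sketch passing such keywords through the URL query string, under the assumption that boolean values are accepted in string form; the values shown are illustrative:

engine = create_engine(
    "mssql+pyodbc://scott:tiger@some_dsn?ansi=True&unicode_results=True"
)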

Pass through exact Pyodbc string

A PyODBC connection string can also be sent exactly as specified in ConnectionStrings into the driver using the parameter odbc_connect. The delimiters must be URL escaped, however, as illustrated below using urllib.parse.quote_plus (under Python 2, the equivalent function is urllib.quote_plus):

import urllib.parse

params = urllib.parse.quote_plus(
    "DRIVER={SQL Server Native Client 10.0};"
    "SERVER=dagger;DATABASE=test;UID=user;PWD=password")

engine = create_engine("mssql+pyodbc:///?odbc_connect=%s" % params)

Driver / Unicode Support

PyODBC works best with Microsoft ODBC drivers, particularly in the area of Unicode support on both Python 2 and Python 3.

Using the FreeTDS ODBC drivers on Linux or OSX with PyODBC is not recommended; there have historically been many Unicode-related issues in this area, including before Microsoft offered ODBC drivers for Linux and OSX. Now that Microsoft offers drivers for all platforms, these are the recommended choice for PyODBC use. FreeTDS remains relevant for non-ODBC drivers such as pymssql, where it works very well.

Rowcount Support

Pyodbc only has partial support for rowcount. See the notes at Rowcount Support / ORM Versioning for important notes when using ORM versioning.

mxODBC

Support for the Microsoft SQL Server database via the mxODBC driver.

DBAPI

Documentation and download information (if applicable) for mxODBC is available at: http://www.egenix.com/

Connecting

Connect String:

mssql+mxodbc://<username>:<password>@<dsnname>

Execution Modes

mxODBC features two styles of statement execution, using the cursor.execute() and cursor.executedirect() methods (the second being an extension to the DBAPI specification). The former makes use of a particular API call specific to the SQL Server Native Client ODBC driver known as SQLDescribeParam, while the latter does not.

mxODBC apparently only makes repeated use of a single prepared statement when SQLDescribeParam is used. The advantage to prepared statement reuse is one of performance. The disadvantage is that SQLDescribeParam has a limited set of scenarios in which bind parameters are understood, including that they cannot be placed within the argument lists of function calls, anywhere outside the FROM, or even within subqueries within the FROM clause - making the usage of bind parameters within SELECT statements impossible for all but the most simplistic statements.

For this reason, the mxODBC dialect uses the “native” mode by default only for INSERT, UPDATE, and DELETE statements, and uses the escaped string mode for all other statements.

This behavior can be controlled via Executable.execution_options() using the native_odbc_execute flag with a value of True or False, where a value of True will unconditionally use native bind parameters and a value of False will unconditionally use string-escaped parameters.
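A sketch of forcing string-escaped parameters for a single statement, assuming a Table object mytable and an open Connection conn:

# unconditionally use string-escaped parameters for this SELECT
stmt = mytable.select().execution_options(native_odbc_execute=False)
result = conn.execute(stmt)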

pymssql

Support for the Microsoft SQL Server database via the pymssql driver.

DBAPI

Documentation and download information (if applicable) for pymssql is available at: http://pymssql.org/

Connecting

Connect String:

mssql+pymssql://<username>:<password>@<freetds_name>/?charset=utf8

pymssql is a Python module that provides a Python DBAPI interface around FreeTDS. Compatible builds are available for Linux, MacOSX and Windows platforms.

Modern versions of this driver work very well with SQL Server and FreeTDS from Linux and are highly recommended.

zxjdbc

Support for the Microsoft SQL Server database via the zxJDBC for Jython driver.

Note

Jython is not supported by current versions of SQLAlchemy. The zxjdbc dialect should be considered experimental.

DBAPI

Drivers for this database are available at: http://jtds.sourceforge.net/

Connecting

Connect String:

mssql+zxjdbc://user:pass@host:port/dbname[?key=value&key=value...]

AdoDBAPI

Support for the Microsoft SQL Server database via the adodbapi driver.

DBAPI

Documentation and download information (if applicable) for adodbapi is available at: http://adodbapi.sourceforge.net/

Connecting

Connect String:

mssql+adodbapi://<username>:<password>@<dsnname>

Note

The adodbapi dialect is not implemented in SQLAlchemy versions 0.6 and above at this time.