Working with Engines and Connections
This section details direct usage of the Engine, Connection, and related objects. It's important to note that when using the SQLAlchemy ORM, these objects are not generally accessed; instead, the Session object is used as the interface to the database. However, for applications that are built around direct usage of textual SQL statements and/or SQL expression constructs without involvement by the ORM’s higher level management services, the Engine and Connection are king (and queen?) - read on.
Basic Usage
Recall from Engine Configuration that an Engine is created via the create_engine() call:
engine = create_engine('mysql://scott:tiger@localhost/test')
The typical usage of create_engine() is once per particular database URL, held globally for the lifetime of a single application process. A single Engine manages many individual DBAPI connections on behalf of the process and is intended to be called upon in a concurrent fashion. The Engine is not synonymous with the DBAPI connect function, which represents just one connection resource - the Engine is most efficient when created just once at the module level of an application, not per-object or per-function call.
For a multiple-process application that uses the os.fork system call, or for example the Python multiprocessing module, it’s usually required that a separate Engine be used for each child process. This is because the Engine maintains a reference to a connection pool that ultimately references DBAPI connections - these tend not to be portable across process boundaries. An Engine that is configured not to use pooling (which is achieved via the usage of NullPool) does not have this requirement.
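As a minimal sketch of that non-pooling configuration (the URL here is illustrative), NullPool can be passed to create_engine() via the poolclass parameter:
from sqlalchemy import create_engine
from sqlalchemy.pool import NullPool

# no connections are held in a pool; each checkout opens a new DBAPI
# connection and each check-in closes it, so nothing is shared across forks
engine = create_engine('mysql://scott:tiger@localhost/test', poolclass=NullPool)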
The engine can be used directly to issue SQL to the database. The most generic way is to first procure a connection resource, which you get via the Engine.connect() method:
connection = engine.connect()
result = connection.execute("select username from users")
for row in result:
    print("username:", row['username'])
connection.close()
The connection is an instance of Connection, which is a proxy object for an actual DBAPI connection. The DBAPI connection is retrieved from the connection pool at the point at which Connection is created.
The returned result is an instance of ResultProxy, which references a DBAPI cursor and provides a largely compatible interface with that of the DBAPI cursor. The DBAPI cursor will be closed by the ResultProxy when all of its result rows (if any) are exhausted. A ResultProxy that returns no rows, such as that of an UPDATE statement (without any returned rows), releases cursor resources immediately upon construction.
When the Connection.close() method is called, the referenced DBAPI connection is released to the connection pool. From the perspective of the database itself, nothing is actually “closed”, assuming pooling is in use. The pooling mechanism issues a rollback() call on the DBAPI connection so that any transactional state or locks are removed, and the connection is ready for its next usage.
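As a sketch of an alternative to calling Connection.close() explicitly, the Connection may also be used as a Python context manager, which returns the DBAPI connection to the pool when the block exits:
# the connection is released back to the pool when the "with" block ends,
# even if an exception is raised inside the block
with engine.connect() as connection:
    result = connection.execute("select username from users")
    for row in result:
        print("username:", row['username'])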
The above procedure can be performed in a shorthand way by using the Engine.execute() method of Engine itself:
result = engine.execute("select username from users")
for row in result:
    print("username:", row['username'])
Where above, the Engine.execute() method acquires a new Connection on its own, executes the statement with that object, and returns the ResultProxy. In this case, the ResultProxy contains a special flag known as close_with_result, which indicates that when its underlying DBAPI cursor is closed, the Connection object itself is also closed, which again returns the DBAPI connection to the connection pool, releasing transactional resources.
If the ResultProxy potentially has rows remaining, it can be instructed to close out its resources explicitly:
result.close()
If the ResultProxy has pending rows remaining and is dereferenced by the application without being closed, Python garbage collection will ultimately close out the cursor as well as trigger a return of the pooled DBAPI connection resource to the pool (SQLAlchemy achieves this by the usage of weakref callbacks - never the __del__ method) - however it’s never a good idea to rely upon Python garbage collection to manage resources.
Our example above illustrated the execution of a textual SQL string. The Connection.execute() method can of course accommodate more than that, including the variety of SQL expression constructs described in SQL Expression Language Tutorial.
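As a brief sketch, assuming the users_table Table object defined later in this section, a SQL expression construct can be passed to Connection.execute() in place of a plain string:
from sqlalchemy import select

# compiles to a SELECT with a bound parameter for the id value
stmt = select([users_table.c.username]).where(users_table.c.id == 5)
result = connection.execute(stmt)
row = result.fetchone()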
Using Transactions
Note
This section describes how to use transactions when working directly with Engine and Connection objects. When using the SQLAlchemy ORM, the public API for transaction control is via the Session object, which makes usage of the Transaction object internally. See Managing Transactions for further information.
The Connection object provides a Connection.begin() method which returns a Transaction object. This object is usually used within a try/except clause so that it is guaranteed to invoke Transaction.rollback() or Transaction.commit():
connection = engine.connect()
trans = connection.begin()
try:
    r1 = connection.execute(table1.select())
    connection.execute(table1.insert(), col1=7, col2='this is some data')
    trans.commit()
except:
    trans.rollback()
    raise
The above block can be created more succinctly using context managers, either given an Engine:
# runs a transaction
with engine.begin() as connection:
    r1 = connection.execute(table1.select())
    connection.execute(table1.insert(), col1=7, col2='this is some data')
Or from the Connection, in which case the Transaction object is available as well:
with connection.begin() as trans:
    r1 = connection.execute(table1.select())
    connection.execute(table1.insert(), col1=7, col2='this is some data')
Nesting of Transaction Blocks
The Transaction object also handles “nested” behavior by keeping track of the outermost begin/commit pair. In this example, two functions both issue a transaction on a Connection, but only the outermost Transaction object actually takes effect when it is committed.
# method_a starts a transaction and calls method_b
def method_a(connection):
    trans = connection.begin()  # open a transaction
    try:
        method_b(connection)
        trans.commit()  # transaction is committed here
    except:
        trans.rollback()  # this rolls back the transaction unconditionally
        raise

# method_b also starts a transaction
def method_b(connection):
    trans = connection.begin()  # open a transaction - this runs in the context of method_a's transaction
    try:
        connection.execute("insert into mytable values ('bat', 'lala')")
        connection.execute(mytable.insert(), col1='bat', col2='lala')
        trans.commit()  # transaction is not committed yet
    except:
        trans.rollback()  # this rolls back the transaction unconditionally
        raise

# open a Connection and call method_a
conn = engine.connect()
method_a(conn)
conn.close()
Above, method_a is called first, which calls connection.begin(). Then it calls method_b. When method_b calls connection.begin(), it just increments a counter that is decremented when it calls commit(). If either method_a or method_b calls rollback(), the whole transaction is rolled back. The transaction is not committed until method_a calls the commit() method. This “nesting” behavior allows the creation of functions which “guarantee” that a transaction will be used if one was not already available, but will automatically participate in an enclosing transaction if one exists.
Understanding Autocommit
The previous transaction example illustrates how to use Transaction so that several executions can take part in the same transaction. What happens when we issue an INSERT, UPDATE or DELETE call without using Transaction? While some DBAPI implementations provide various special “non-transactional” modes, the core behavior of DBAPI per PEP-0249 is that a transaction is always in progress, providing only rollback() and commit() methods but no begin(). SQLAlchemy assumes this is the case for any given DBAPI.
Given this requirement, SQLAlchemy implements its own “autocommit” feature which works completely consistently across all backends. This is achieved by detecting statements which represent data-changing operations, i.e. INSERT, UPDATE, DELETE, as well as data definition language (DDL) statements such as CREATE TABLE, ALTER TABLE, and then issuing a COMMIT automatically if no transaction is in progress. The detection is based on the presence of the autocommit=True execution option on the statement. If the statement is a text-only statement and the flag is not set, a regular expression is used to detect INSERT, UPDATE, DELETE, as well as a variety of other commands for a particular backend:
conn = engine.connect()
conn.execute("INSERT INTO users VALUES (1, 'john')") # autocommits
The “autocommit” feature is only in effect when no Transaction has otherwise been declared. This means the feature is not generally used with the ORM, as the Session object by default always maintains an ongoing Transaction.
Full control of the “autocommit” behavior is available using the generative Connection.execution_options() method provided on Connection, Engine, and Executable, using the “autocommit” flag which will turn on or off the autocommit for the selected scope. For example, a text() construct representing a stored procedure that commits might use it so that a SELECT statement will issue a COMMIT:
engine.execute(text("SELECT my_mutating_procedure()").execution_options(autocommit=True))
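The same flag may also be applied to a Connection as a whole, in which case it affects every statement emitted on that connection while no Transaction is in progress; a short sketch:
# statements on this Connection outside of an explicit Transaction are
# followed by a COMMIT, including SELECTs that the regular expression
# would not otherwise detect as data-changing
autocommit_connection = engine.connect().execution_options(autocommit=True)
autocommit_connection.execute("SELECT my_mutating_procedure()")
autocommit_connection.close()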
Connectionless Execution, Implicit Execution
Recall from the first section we mentioned executing with and without explicit usage of Connection. “Connectionless” execution refers to the usage of the execute() method on an object which is not a Connection. This was illustrated using the Engine.execute() method of Engine:
result = engine.execute("select username from users")
for row in result:
    print("username:", row['username'])
In addition to “connectionless” execution, it is also possible to use the Executable.execute() method of any Executable construct, which is a marker for SQL expression objects that support execution. The SQL expression object itself references an Engine or Connection known as the bind, which it uses in order to provide so-called “implicit” execution services.
Given a table as below:
from sqlalchemy import MetaData, Table, Column, Integer, String

meta = MetaData()
users_table = Table('users', meta,
    Column('id', Integer, primary_key=True),
    Column('name', String(50))
)
Explicit execution delivers the SQL text or constructed SQL expression to the Connection.execute() method of Connection:
engine = create_engine('sqlite:///file.db')
connection = engine.connect()
result = connection.execute(users_table.select())
for row in result:
    # ....
connection.close()
Explicit, connectionless execution delivers the expression to the Engine.execute() method of Engine:
engine = create_engine('sqlite:///file.db')
result = engine.execute(users_table.select())
for row in result:
    # ....
result.close()
Implicit execution is also connectionless, and makes usage of the Executable.execute() method on the expression itself. This method is provided as part of the Executable class, which refers to a SQL statement that is sufficient for being invoked against the database. The method relies on the assumption that either an Engine or Connection has been bound to the expression object. By “bound” we mean that the special attribute MetaData.bind has been used to associate a series of Table objects and all SQL constructs derived from them with a specific engine:
engine = create_engine('sqlite:///file.db')
meta.bind = engine
result = users_table.select().execute()
for row in result:
    # ....
result.close()
Above, we associate an Engine with a MetaData object using the special attribute MetaData.bind. The select() construct produced from the Table object has a method Executable.execute(), which will search for an Engine that’s “bound” to the Table.
Overall, the usage of “bound metadata” has three general effects:
- SQL statement objects gain an Executable.execute() method which automatically locates a “bind” with which to execute themselves.
- The ORM Session object supports using “bound metadata” in order to establish which Engine should be used to invoke SQL statements on behalf of a particular mapped class, though the Session also features its own explicit system of establishing complex Engine / mapped class configurations.
- The MetaData.create_all(), MetaData.drop_all(), Table.create(), Table.drop(), and “autoload” features all make usage of the bound Engine automatically without the need to pass it explicitly.
Note
The concepts of “bound metadata” and “implicit execution” are not emphasized in modern SQLAlchemy. While they offer some convenience, they are no longer required by any API and are never necessary.
In applications where multiple Engine objects are present, each one logically associated with a certain set of tables (i.e. vertical sharding), the “bound metadata” technique can be used so that individual Table objects can refer to the appropriate Engine automatically; in particular this is supported within the ORM via the Session object as a means to associate Table objects with an appropriate Engine, as an alternative to using the bind arguments accepted directly by the Session.
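A minimal sketch of that arrangement, using illustrative SQLite URLs and table names, binds each MetaData collection to its own Engine so that statements derived from each Table locate the appropriate Engine automatically:
engine_a = create_engine('sqlite:///users.db')
engine_b = create_engine('sqlite:///analytics.db')

meta_a = MetaData(bind=engine_a)
meta_b = MetaData(bind=engine_b)

users = Table('users', meta_a, Column('id', Integer, primary_key=True))
events = Table('events', meta_b, Column('id', Integer, primary_key=True))

# each statement locates the Engine bound to its own MetaData
users.select().execute()
events.select().execute()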
However, the “implicit execution” technique is not at all appropriate for use with the ORM, as it bypasses the transactional context maintained by the Session.
Overall, in the vast majority of cases, “bound metadata” and “implicit execution” are not useful. While “bound metadata” has a marginal level of usefulness with regards to ORM configuration, “implicit execution” is a very old usage pattern that in most cases is more confusing than it is helpful, and its usage is discouraged. Both patterns seem to encourage the overuse of expedient “short cuts” in application design which lead to problems later on.
Modern SQLAlchemy usage, especially the ORM, places a heavy stress on working within the context of a transaction at all times; the “implicit execution” concept makes the job of associating statement execution with a particular transaction much more difficult. The Executable.execute() method on a particular SQL statement usually implies that the execution is not part of any particular transaction, which is usually not the desired effect.
In both “connectionless” examples, the Connection is created behind the scenes; the ResultProxy returned by the execute() call references the Connection used to issue the SQL statement. When the ResultProxy is closed, the underlying Connection is closed for us, resulting in the DBAPI connection being returned to the pool with transactional resources removed.
Translation of Schema Names
To support multi-tenancy applications that distribute common sets of tables into multiple schemas, the Connection.execution_options.schema_translate_map execution option may be used to repurpose a set of Table objects to render under different schema names without any changes.
Given a table:
user_table = Table(
    'user', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String(50))
)
The “schema” of this Table as defined by the Table.schema attribute is None. The Connection.execution_options.schema_translate_map can specify that all Table objects with a schema of None would instead render the schema as user_schema_one:
connection = engine.connect().execution_options(
    schema_translate_map={None: "user_schema_one"})
result = connection.execute(user_table.select())
The above code will invoke SQL on the database of the form:
SELECT user_schema_one.user.id, user_schema_one.user.name FROM
user_schema_one.user
That is, the schema name is substituted with our translated name. The map can specify any number of target->destination schemas:
connection = engine.connect().execution_options(
    schema_translate_map={
        None: "user_schema_one",     # no schema name -> "user_schema_one"
        "special": "special_schema", # schema="special" becomes "special_schema"
        "public": None               # Table objects with schema="public" will render with no schema
    })
The Connection.execution_options.schema_translate_map parameter affects all DDL and SQL constructs generated from the SQL expression language, as derived from the Table or Sequence objects. It does not impact literal string SQL used via the text() construct nor via plain strings passed to Connection.execute().
The feature takes effect only in those cases where the name of the schema is derived directly from that of a Table or Sequence; it does not impact methods where a string schema name is passed directly. By this pattern, it takes effect within the “can create” / “can drop” checks performed by methods such as MetaData.create_all() or MetaData.drop_all(), and it takes effect when using table reflection given a Table object. However it does not affect the operations present on the Inspector object, as the schema name is passed to these methods explicitly.
New in version 1.1.
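As a sketch of the DDL case, assuming the metadata collection from the example above, the translated schema name is applied when the Connection is passed as the bind to MetaData.create_all():
connection = engine.connect().execution_options(
    schema_translate_map={None: "user_schema_one"})

# CREATE TABLE statements render the translated schema name
metadata.create_all(connection)
connection.close()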
Engine Disposal
The Engine refers to a connection pool, which means under normal circumstances, there are open database connections present while the Engine object is still resident in memory. When an Engine is garbage collected, its connection pool is no longer referred to by that Engine, and assuming none of its connections are still checked out, the pool and its connections will also be garbage collected, which has the effect of closing out the actual database connections as well. But otherwise, the Engine will hold onto open database connections assuming it uses the default pool implementation, QueuePool.
The Engine is intended to normally be a permanent fixture established up-front and maintained throughout the lifespan of an application. It is not intended to be created and disposed on a per-connection basis; it is instead a registry that maintains both a pool of connections as well as configurational information about the database and DBAPI in use, as well as some degree of internal caching of per-database resources.
However, there are many cases where it is desirable that all connection resources referred to by the Engine be completely closed out. It’s generally not a good idea to rely on Python garbage collection for this to occur for these cases; instead, the Engine can be explicitly disposed using the Engine.dispose() method. This disposes of the engine’s underlying connection pool and replaces it with a new one that’s empty. Provided that the Engine is discarded at this point and no longer used, all checked-in connections which it refers to will also be fully closed.
Valid use cases for calling Engine.dispose() include:
- When a program wants to release any remaining checked-in connections held by the connection pool and expects to no longer be connected to that database at all for any future operations.
- When a program uses multiprocessing or fork(), and an Engine object is copied to the child process, Engine.dispose() should be called so that the engine creates brand new database connections local to that fork. Database connections generally do not travel across process boundaries (see the sketch following this list).
- Within test suites or multitenancy scenarios where many ad-hoc, short-lived Engine objects may be created and disposed.
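A minimal sketch of the multiprocessing case follows; the worker function and URL are illustrative:
from multiprocessing import Process

from sqlalchemy import create_engine

engine = create_engine('mysql://scott:tiger@localhost/test')

def run_worker():
    # discard any pooled connections inherited from the parent; the Engine
    # then opens brand new DBAPI connections local to this child process
    engine.dispose()
    with engine.connect() as conn:
        conn.execute("SELECT 1")

if __name__ == '__main__':
    p = Process(target=run_worker)
    p.start()
    p.join()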
Connections that are checked out are not discarded when the engine is disposed or garbage collected, as these connections are still strongly referenced elsewhere by the application. However, after Engine.dispose() is called, those connections are no longer associated with that Engine; when they are closed, they will be returned to their now-orphaned connection pool which will ultimately be garbage collected, once all connections which refer to it are also no longer referenced anywhere. Since this process is not easy to control, it is strongly recommended that Engine.dispose() is called only after all checked out connections are checked in or otherwise de-associated from their pool.
An alternative for applications that are negatively impacted by the Engine object’s use of connection pooling is to disable pooling entirely. This typically incurs only a modest performance impact upon the use of new connections, and means that when a connection is checked in, it is entirely closed out and is not held in memory. See Switching Pool Implementations for guidelines on how to disable pooling.
Using the Threadlocal Execution Strategy
The “threadlocal” engine strategy is an optional feature which can be used by non-ORM applications to associate transactions with the current thread, such that all parts of the application can participate in that transaction implicitly without the need to explicitly reference a Connection.
Note
The “threadlocal” feature is generally discouraged. It’s designed for a particular pattern of usage which is generally considered as a legacy pattern. It has no impact on the “thread safety” of SQLAlchemy components or one’s application. It also should not be used when using an ORM Session object, as the Session itself represents an ongoing transaction and itself handles the job of maintaining connection and transactional resources.
Enabling threadlocal is achieved as follows:
db = create_engine('mysql://localhost/test', strategy='threadlocal')
The above Engine will now acquire a Connection using connection resources derived from a thread-local variable whenever Engine.execute() or Engine.contextual_connect() is called. This connection resource is maintained as long as it is referenced, which allows multiple points of an application to share a transaction while using connectionless execution:
def call_operation1():
    db.execute("insert into users values (?, ?)", 1, "john")

def call_operation2():
    users.update(users.c.user_id==5).execute(name='ed')

db.begin()
try:
    call_operation1()
    call_operation2()
    db.commit()
except:
    db.rollback()
Explicit execution can be mixed with connectionless execution by using the Engine.connect() method to acquire a Connection that is not part of the threadlocal scope:
db.begin()
conn = db.connect()
try:
    conn.execute(log_table.insert(), message="Operation started")
    call_operation1()
    call_operation2()
    db.commit()
    conn.execute(log_table.insert(), message="Operation succeeded")
except:
    db.rollback()
    conn.execute(log_table.insert(), message="Operation failed")
finally:
    conn.close()
To access the Connection that is bound to the threadlocal scope, call Engine.contextual_connect():
conn = db.contextual_connect()
call_operation3(conn)
conn.close()
Calling Connection.close() on the “contextual” connection does not release its resources until all other usages of that resource are closed as well, including that any ongoing transactions are rolled back or committed.
Working with Raw DBAPI Connections
There are some cases where SQLAlchemy does not provide a genericized way of accessing some DBAPI functions, such as calling stored procedures as well as dealing with multiple result sets. In these cases, it’s just as expedient to deal with the raw DBAPI connection directly.
The most common way to access the raw DBAPI connection is to get it from an already present Connection object directly. It is available via the Connection.connection attribute:
connection = engine.connect()
dbapi_conn = connection.connection
The DBAPI connection here is actually “proxied” in terms of the originating connection pool; however, this is an implementation detail that in most cases can be ignored. As this DBAPI connection is still contained within the scope of an owning Connection object, it is best to make use of the Connection object for most features such as transaction control as well as calling the Connection.close() method; if these operations are performed on the DBAPI connection directly, the owning Connection will not be aware of these changes in state.
To overcome the limitations imposed by the DBAPI connection that is maintained by an owning Connection, a DBAPI connection is also available without the need to procure a Connection first, using the Engine.raw_connection() method of Engine:
dbapi_conn = engine.raw_connection()
This DBAPI connection is again a “proxied” form as was the case before. The purpose of this proxying is now apparent, as when we call the .close() method of this connection, the DBAPI connection is typically not actually closed, but instead released back to the engine’s connection pool:
dbapi_conn.close()
While SQLAlchemy may in the future add built-in patterns for more DBAPI use cases, there are diminishing returns as these cases tend to be rarely needed and they also vary highly depending on the type of DBAPI in use, so in any case the direct DBAPI calling pattern is always there for those cases where it is needed.
Some recipes for DBAPI connection use follow.
Calling Stored Procedures
For stored procedures with special syntactical or parameter concerns, DBAPI-level callproc may be used:
connection = engine.raw_connection()
try:
    cursor = connection.cursor()
    cursor.callproc("my_procedure", ['x', 'y', 'z'])
    results = list(cursor.fetchall())
    cursor.close()
    connection.commit()
finally:
    connection.close()
Multiple Result Sets
Multiple result set support is available from a raw DBAPI cursor using the nextset method:
connection = engine.raw_connection()
try:
    cursor = connection.cursor()
    cursor.execute("select * from table1; select * from table2")
    results_one = cursor.fetchall()
    cursor.nextset()
    results_two = cursor.fetchall()
    cursor.close()
finally:
    connection.close()
Registering New Dialects
The create_engine() function call locates the given dialect using setuptools entrypoints. These entry points can be established for third party dialects within the setup.py script. For example, to create a new dialect “foodialect://”, the steps are as follows:
- Create a package called foodialect.
- The package should have a module containing the dialect class, which is typically a subclass of sqlalchemy.engine.default.DefaultDialect. In this example let’s say it’s called FooDialect and its module is accessed via foodialect.dialect.
- The entry point can be established in setup.py as follows:
entry_points="""
[sqlalchemy.dialects]
foodialect = foodialect.dialect:FooDialect
"""
If the dialect is providing support for a particular DBAPI on top of an existing SQLAlchemy-supported database, the name can be given including a database-qualification. For example, if FooDialect were in fact a MySQL dialect, the entry point could be established like this:
entry_points="""
[sqlalchemy.dialects]
mysql.foodialect = foodialect.dialect:FooDialect
"""
The above entrypoint would then be accessed as create_engine("mysql+foodialect://").
Registering Dialects In-Process
SQLAlchemy also allows a dialect to be registered within the current process, bypassing the need for separate installation. Use the register() function as follows:
from sqlalchemy.dialects import registry
registry.register("mysql.foodialect", "myapp.dialect", "MyMySQLDialect")
The above will respond to create_engine("mysql+foodialect://") and load the MyMySQLDialect class from the myapp.dialect module.
Connection / Engine API
Object Name | Description
---|---
Connectable | Interface for an object which supports execution of SQL constructs.
Connection | Provides high-level functionality for a wrapped DB-API connection.
CreateEnginePlugin | A set of hooks intended to augment the construction of an Engine object based on entrypoint names in a URL.
Engine | Connects a Pool and Dialect together to provide a source of database connectivity and behavior.
ExceptionContext | Encapsulate information about an error condition in progress.
NestedTransaction | Represent a ‘nested’, or SAVEPOINT transaction.
ResultProxy | Wraps a DB-API cursor object to provide easier access to row columns.
RowProxy | Proxy values from a single cursor row.
Transaction | Represent a database transaction in progress.
TwoPhaseTransaction | Represent a two-phase transaction.
- class sqlalchemy.engine.Connection(engine, connection=None, close_with_result=False, _branch_from=None, _execution_options=None, _dispatch=None, _has_events=None)
Provides high-level functionality for a wrapped DB-API connection.
Provides execution support for string-based SQL statements as well as ClauseElement, Compiled and DefaultGenerator objects. Provides a begin() method to return Transaction objects.
The Connection object is not thread-safe. While a Connection can be shared among threads using properly synchronized access, it is still possible that the underlying DBAPI connection may not support shared access between threads. Check the DBAPI documentation for details.
Members
__init__(), begin(), begin_nested(), begin_twophase(), close(), closed, connect(), connection, contextual_connect(), default_isolation_level, detach(), execute(), execution_options(), get_isolation_level(), in_transaction(), info, invalidate(), invalidated, run_callable(), scalar(), schema_for_object, transaction()
The Connection object represents a single DBAPI connection checked out from the connection pool. In this state, the connection pool has no effect upon the connection, including its expiration or timeout state. For the connection pool to properly manage connections, connections should be returned to the connection pool (i.e. connection.close()) whenever the connection is not in use.
Class signature
class sqlalchemy.engine.Connection (sqlalchemy.engine.Connectable)
- method sqlalchemy.engine.Connection.__init__(engine, connection=None, close_with_result=False, _branch_from=None, _execution_options=None, _dispatch=None, _has_events=None)
Construct a new Connection.
The constructor here is not public and is called only by an Engine. See the Engine.connect() and Engine.contextual_connect() methods.
- method sqlalchemy.engine.Connection.begin()
Begin a transaction and return a transaction handle.
The returned object is an instance of Transaction. This object represents the “scope” of the transaction, which completes when either the Transaction.rollback() or Transaction.commit() method is called.
Nested calls to begin() on the same Connection will return new Transaction objects that represent an emulated transaction within the scope of the enclosing transaction, that is:
trans = conn.begin()   # outermost transaction
trans2 = conn.begin()  # "nested"
trans2.commit()        # does nothing
trans.commit()         # actually commits
Calls to Transaction.commit() only have an effect when invoked via the outermost Transaction object, though the Transaction.rollback() method of any of the Transaction objects will roll back the transaction.
See also
Connection.begin_nested() - use a SAVEPOINT
Connection.begin_twophase() - use a two phase / XID transaction
Engine.begin() - context manager available from Engine
- method sqlalchemy.engine.Connection.begin_nested()
Begin a nested transaction and return a transaction handle.
The returned object is an instance of NestedTransaction.
Nested transactions require SAVEPOINT support in the underlying database. Any transaction in the hierarchy may commit and rollback, however the outermost transaction still controls the overall commit or rollback of the transaction as a whole.
- method sqlalchemy.engine.Connection.begin_twophase(xid=None)
Begin a two-phase or XA transaction and return a transaction handle.
The returned object is an instance of TwoPhaseTransaction, which in addition to the methods provided by Transaction, also provides a TwoPhaseTransaction.prepare() method.
- Parameters:
xid – the two phase transaction id. If not supplied, a random id will be generated.
- method sqlalchemy.engine.Connection.close()
Close this Connection.
This results in a release of the underlying database resources, that is, the DBAPI connection referenced internally. The DBAPI connection is typically restored back to the connection-holding Pool referenced by the Engine that produced this Connection. Any transactional state present on the DBAPI connection is also unconditionally released via the DBAPI connection’s rollback() method, regardless of any Transaction object that may be outstanding with regards to this Connection.
After Connection.close() is called, the Connection is permanently in a closed state, and will allow no further operations.
- attribute sqlalchemy.engine.Connection.closed
Return True if this connection is closed.
- method sqlalchemy.engine.Connection.connect()
Returns a branched version of this Connection.
The Connection.close() method on the returned Connection can be called and this Connection will remain open.
This method provides usage symmetry with Engine.connect(), including for usage with context managers.
- attribute sqlalchemy.engine.Connection.connection
The underlying DB-API connection managed by this Connection.
See also
Working with Raw DBAPI Connections
- method sqlalchemy.engine.Connection.contextual_connect(**kwargs)
Returns a branched version of this Connection.
The Connection.close() method on the returned Connection can be called and this Connection will remain open.
This method provides usage symmetry with Engine.contextual_connect(), including for usage with context managers.
- attribute sqlalchemy.engine.Connection.default_isolation_level
The default isolation level assigned to this Connection.
This is the isolation level setting that the Connection has when first procured via the Engine.connect() method. This level stays in place until the Connection.execution_options.isolation_level option is used to change the setting on a per-Connection basis.
Unlike Connection.get_isolation_level(), this attribute is set ahead of time from the first connection procured by the dialect, so no SQL query is invoked when this accessor is called.
New in version 0.9.9.
See also
Connection.get_isolation_level() - view current level
create_engine.isolation_level - set per Engine isolation level
Connection.execution_options.isolation_level - set per Connection isolation level
- method sqlalchemy.engine.Connection.detach()
Detach the underlying DB-API connection from its connection pool.
E.g.:
with engine.connect() as conn:
    conn.detach()
    conn.execute("SET search_path TO schema1, schema2")
    # work with connection

# connection is fully closed (since we used "with:", can
# also call .close())
This Connection instance will remain usable. When closed (or exited from a context manager context as above), the DB-API connection will be literally closed and not returned to its originating pool.
This method can be used to insulate the rest of an application from a modified state on a connection (such as a transaction isolation level or similar).
- method sqlalchemy.engine.Connection.execute(object_, *multiparams, **params)
Executes a SQL statement construct and returns a ResultProxy.
- Parameters:
object –
The statement to be executed. May be one of:
- a plain string
- any ClauseElement construct that is also a subclass of Executable, such as a select() construct
- a FunctionElement, such as that generated by func, which will be automatically wrapped in a SELECT statement, which is then executed
- a DDLElement object
- a DefaultGenerator object
- a Compiled object
*multiparams/**params –
represent bound parameter values to be used in the execution. Typically, the format is either a collection of one or more dictionaries passed to *multiparams:
conn.execute(
    table.insert(),
    {"id": 1, "value": "v1"},
    {"id": 2, "value": "v2"}
)
…or individual key/values interpreted by **params:
conn.execute(
    table.insert(), id=1, value="v1"
)
In the case that a plain SQL string is passed, and the underlying DBAPI accepts positional bind parameters, a collection of tuples or individual values in *multiparams may be passed:
conn.execute(
    "INSERT INTO table (id, value) VALUES (?, ?)",
    (1, "v1"), (2, "v2")
)
conn.execute(
    "INSERT INTO table (id, value) VALUES (?, ?)",
    1, "v1"
)
Note above, the usage of a question mark “?” or other symbol is contingent upon the “paramstyle” accepted by the DBAPI in use, which may be any of “qmark”, “named”, “pyformat”, “format”, “numeric”. See pep-249 for details on paramstyle.
To execute a textual SQL statement which uses bound parameters in a DBAPI-agnostic way, use the text() construct.
- method sqlalchemy.engine.Connection.execution_options(**opt)
Set non-SQL options for the connection which take effect during execution.
The method returns a copy of this Connection which references the same underlying DBAPI connection, but also defines the given execution options which will take effect for a call to execute(). As the new Connection references the same underlying resource, it’s usually a good idea to ensure that the copies will be discarded immediately, which is implicit if used as in:
result = connection.execution_options(stream_results=True).\
    execute(stmt)
Note that any key/value can be passed to Connection.execution_options(), and it will be stored in the _execution_options dictionary of the Connection. It is suitable for usage by end-user schemes to communicate with event listeners, for example.
The keywords that are currently recognized by SQLAlchemy itself include all those listed under Executable.execution_options(), as well as others that are specific to Connection.
- Parameters:
autocommit – Available on: Connection, statement. When True, a COMMIT will be invoked after execution when executed in ‘autocommit’ mode, i.e. when an explicit transaction is not begun on the connection. Note that DBAPI connections by default are always in a transaction - SQLAlchemy uses rules applied to different kinds of statements to determine if COMMIT will be invoked in order to provide its “autocommit” feature. Typically, all INSERT/UPDATE/DELETE statements as well as CREATE/DROP statements have autocommit behavior enabled; SELECT constructs do not. Use this option when invoking a SELECT or other specific SQL construct where COMMIT is desired (typically when calling stored procedures and such), and an explicit transaction is not in progress.
compiled_cache –
Available on: Connection. A dictionary where Compiled objects will be cached when the Connection compiles a clause expression into a Compiled object. It is the user’s responsibility to manage the size of this dictionary, which will have keys corresponding to the dialect, clause element, the column names within the VALUES or SET clause of an INSERT or UPDATE, as well as the “batch” mode for an INSERT or UPDATE statement. The format of this dictionary is not guaranteed to stay the same in future releases.
Note that the ORM makes use of its own “compiled” caches for some operations, including flush operations. The caching used by the ORM internally supersedes a cache dictionary specified here.
isolation_level –
Available on: Connection. Set the transaction isolation level for the lifespan of this Connection object (not the underlying DBAPI connection, for which the level is reset to its original setting upon termination of this Connection object).
Valid values include those string values accepted by the create_engine.isolation_level parameter passed to create_engine(). These levels are semi-database specific; see individual dialect documentation for valid levels.
Note that this option necessarily affects the underlying DBAPI connection for the lifespan of the originating Connection, and is not per-execution. This setting is not removed until the underlying DBAPI connection is returned to the connection pool, i.e. the Connection.close() method is called.
Warning
The isolation_level execution option should not be used when a transaction is already established, that is, the Connection.begin() method or similar has been called. A database cannot change the isolation level on a transaction in progress, and different DBAPIs and/or SQLAlchemy dialects may implicitly roll back or commit the transaction, or not affect the connection at all.
Changed in version 0.9.9: A warning is emitted when the isolation_level execution option is used after a transaction has been started with Connection.begin() or similar.
Note
The isolation_level execution option is implicitly reset if the Connection is invalidated, e.g. via the Connection.invalidate() method, or if a disconnection error occurs. The new connection produced after the invalidation will not have the isolation level re-applied to it automatically.
See also
create_engine.isolation_level - set per Engine isolation level
Connection.get_isolation_level() - view current level
PostgreSQL Transaction Isolation
SQL Server Transaction Isolation
Setting Transaction Isolation Levels - for the ORM
no_parameters – When True, if the final parameter list or dictionary is totally empty, will invoke the statement on the cursor as cursor.execute(statement), not passing the parameter collection at all. Some DBAPIs such as psycopg2 and mysql-python consider percent signs as significant only when parameters are present; this option allows code to generate SQL containing percent signs (and possibly other characters) that is neutral regarding whether it’s executed by the DBAPI or piped into a script that’s later invoked by command line tools.
stream_results – Available on: Connection, statement. Indicate to the dialect that results should be “streamed” and not pre-buffered, if possible. This is a limitation of many DBAPIs. The flag is currently understood only by the psycopg2, mysqldb and pymysql dialects.
schema_translate_map –
Available on: Connection, Engine. A dictionary mapping schema names to schema names, that will be applied to the Table.schema element of each Table encountered when SQL or DDL expression elements are compiled into strings; the resulting schema name will be converted based on presence in the map of the original name.
New in version 1.1.
See also
Translation of Schema Names
- method sqlalchemy.engine.Connection.get_isolation_level()
Return the current isolation level assigned to this Connection.
This will typically be the default isolation level as determined by the dialect, unless the Connection.execution_options.isolation_level feature has been used to alter the isolation level on a per-Connection basis.
This attribute will typically perform a live SQL operation in order to procure the current isolation level, so the value returned is the actual level on the underlying DBAPI connection regardless of how this state was set. Compare to the Connection.default_isolation_level accessor which returns the dialect-level setting without performing a SQL query.
New in version 0.9.9.
See also
Connection.default_isolation_level - view default level
create_engine.isolation_level - set per Engine isolation level
Connection.execution_options.isolation_level - set per Connection isolation level
- method sqlalchemy.engine.Connection.in_transaction()
Return True if a transaction is in progress.
- attribute sqlalchemy.engine.Connection.info
Info dictionary associated with the underlying DBAPI connection referred to by this Connection, allowing user-defined data to be associated with the connection.
The data here will follow along with the DBAPI connection including after it is returned to the connection pool and used again in subsequent instances of Connection.
- method sqlalchemy.engine.Connection.invalidate(exception=None)
Invalidate the underlying DBAPI connection associated with this Connection.
The underlying DBAPI connection is literally closed (if possible), and is discarded. Its source connection pool will typically lazily create a new connection to replace it.
Upon the next use (where “use” typically means using the Connection.execute() method or similar), this Connection will attempt to procure a new DBAPI connection using the services of the Pool as a source of connectivity (e.g. a “reconnection”).
If a transaction was in progress (e.g. the Connection.begin() method has been called) when the Connection.invalidate() method is called, at the DBAPI level all state associated with this transaction is lost, as the DBAPI connection is closed. The Connection will not allow a reconnection to proceed until the Transaction object is ended, by calling the Transaction.rollback() method; until that point, any attempt at continuing to use the Connection will raise an InvalidRequestError. This is to prevent applications from accidentally continuing an ongoing transactional operation despite the fact that the transaction has been lost due to an invalidation.
The Connection.invalidate() method, just like auto-invalidation, will at the connection pool level invoke the PoolEvents.invalidate() event.
See also
More on Invalidation
- attribute sqlalchemy.engine.Connection.invalidated
Return True if this connection was invalidated.
- method sqlalchemy.engine.Connection.run_callable(callable_, *args, **kwargs)
Given a callable object or function, execute it, passing a Connection as the first argument.
The given *args and **kwargs are passed subsequent to the Connection argument.
This function, along with Engine.run_callable(), allows a function to be run with a Connection or Engine object without the need to know which one is being dealt with.
- method sqlalchemy.engine.Connection.scalar(object_, *multiparams, **params)
Executes and returns the first column of the first row.
The underlying result/cursor is closed after execution.
- attribute sqlalchemy.engine.Connection.schema_for_object = <sqlalchemy.sql.schema._SchemaTranslateMap object>
Return the “.schema” attribute for an object.
Used for Table, Sequence and similar objects, and takes into account the Connection.execution_options.schema_translate_map parameter.
New in version 1.1.
See also
Translation of Schema Names
- method sqlalchemy.engine.Connection.transaction(callable_, *args, **kwargs)
Execute the given function within a transaction boundary.
The function is passed this Connection as the first argument, followed by the given *args and **kwargs, e.g.:
def do_something(conn, x, y):
    conn.execute("some statement", {'x':x, 'y':y})

conn.transaction(do_something, 5, 10)
The operations inside the function are all invoked within the context of a single Transaction. Upon success, the transaction is committed. If an exception is raised, the transaction is rolled back before propagating the exception.
Note
The transaction() method is superseded by the usage of the Python with: statement, which can be used with Connection.begin():
with conn.begin():
    conn.execute("some statement", {'x':5, 'y':10})
As well as with Engine.begin():
with engine.begin() as conn:
    conn.execute("some statement", {'x':5, 'y':10})
See also
Engine.begin() - engine-level transactional context
Engine.transaction() - engine-level version of Connection.transaction()
- class sqlalchemy.engine.Connectable
Interface for an object which supports execution of SQL constructs.
The two implementations of Connectable are Connection and Engine.
Connectable must also implement the ‘dialect’ member which references a Dialect instance.
- method sqlalchemy.engine.Connectable.connect(**kwargs)
Return a Connection object.
Depending on context, this may be self if this object is already an instance of Connection, or a newly procured Connection if this object is an instance of Engine.
- method sqlalchemy.engine.Connectable.contextual_connect()
Return a Connection object which may be part of an ongoing context.
Depending on context, this may be self if this object is already an instance of Connection, or a newly procured Connection if this object is an instance of Engine.
method
sqlalchemy.engine.Connectable.
create(entity, **kwargs)¶ Emit CREATE statements for the given schema entity.
Deprecated since version 0.7: The
Connectable.create()
method is deprecated and will be removed in a future release. Please use the.create()
method on specific schema objects to emit DDL sequences, includingTable.create()
,Index.create()
, andMetaData.create_all()
.
-
method
sqlalchemy.engine.Connectable.
drop(entity, **kwargs)¶ Emit DROP statements for the given schema entity.
Deprecated since version 0.7: The
Connectable.drop()
method is deprecated and will be removed in a future release. Please use the.drop()
method on specific schema objects to emit DDL sequences, includingTable.drop()
,Index.drop()
, andMetaData.drop_all()
.
-
method
sqlalchemy.engine.Connectable.
execute(object_, *multiparams, **params)¶ Executes the given construct and returns a
ResultProxy
.
-
method
sqlalchemy.engine.Connectable.
scalar(object_, *multiparams, **params)¶ Executes and returns the first column of the first row.
The underlying cursor is closed after execution.
-
method
- class sqlalchemy.engine.CreateEnginePlugin(url, kwargs)
A set of hooks intended to augment the construction of an Engine object based on entrypoint names in a URL.
The purpose of CreateEnginePlugin is to allow third-party systems to apply engine, pool and dialect level event listeners without the need for the target application to be modified; instead, the plugin names can be added to the database URL. Target applications for CreateEnginePlugin include:
- connection and SQL performance tools, e.g. which use events to track number of checkouts and/or time spent with statements
- connectivity plugins such as proxies
Plugins are registered using entry points in a similar way as that of dialects:
entry_points={
    'sqlalchemy.plugins': [
        'myplugin = myapp.plugins:MyPlugin'
    ]
}
A plugin that uses the above names would be invoked from a database URL as in:
from sqlalchemy import create_engine

engine = create_engine(
    "mysql+pymysql://scott:tiger@localhost/test?plugin=myplugin")
Alternatively, the create_engine.plugins argument may be passed as a list to create_engine():
engine = create_engine(
    "mysql+pymysql://scott:tiger@localhost/test",
    plugins=["myplugin"])
New in version 1.2.3: plugin names can also be specified to create_engine() as a list.
The plugin argument supports multiple instances, so that a URL may specify multiple plugins; they are loaded in the order stated in the URL:
engine = create_engine(
    "mysql+pymysql://scott:tiger@localhost/"
    "test?plugin=plugin_one&plugin=plugin_two&plugin=plugin_three")
A plugin can receive additional arguments from the URL string as well as from the keyword arguments passed to create_engine(). The URL object and the keyword dictionary are passed to the constructor so that these arguments can be extracted from the url’s URL.query collection as well as from the dictionary:
class MyPlugin(CreateEnginePlugin):
    def __init__(self, url, kwargs):
        self.my_argument_one = url.query.pop('my_argument_one')
        self.my_argument_two = url.query.pop('my_argument_two')
        self.my_argument_three = kwargs.pop('my_argument_three', None)
Arguments like those illustrated above would be consumed from the following:
from sqlalchemy import create_engine

engine = create_engine(
    "mysql+pymysql://scott:tiger@localhost/"
    "test?plugin=myplugin&my_argument_one=foo&my_argument_two=bar",
    my_argument_three='bat')
The URL and dictionary are used for subsequent setup of the engine as they are, so the plugin can modify their arguments in-place. Arguments that are only understood by the plugin should be popped or otherwise removed so that they aren’t interpreted as erroneous arguments afterwards.
When the engine creation process completes and produces the Engine object, it is again passed to the plugin via the CreateEnginePlugin.engine_created() hook. In this hook, additional changes can be made to the engine, most typically involving setup of events (e.g. those defined in Core Events).
New in version 1.1.
- method sqlalchemy.engine.CreateEnginePlugin.__init__(url, kwargs)
Construct a new CreateEnginePlugin.
The plugin object is instantiated individually for each call to create_engine(). A single Engine will be passed to the CreateEnginePlugin.engine_created() method corresponding to this URL.
- Parameters:
url – the URL object. The plugin should inspect what it needs here as well as remove its custom arguments from the URL.query collection. The URL can be modified in-place in any other way as well.
kwargs – The keyword arguments passed to create_engine(). The plugin can read and modify this dictionary in-place, to affect the ultimate arguments used to create the engine. It should remove its custom arguments from the dictionary as well.
- method sqlalchemy.engine.CreateEnginePlugin.engine_created(engine)
Receive the Engine object when it is fully constructed.
The plugin may make additional changes to the engine, such as registering engine or connection pool events.
- method sqlalchemy.engine.CreateEnginePlugin.handle_dialect_kwargs(dialect_cls, dialect_args)
Parse and modify dialect kwargs.
- method sqlalchemy.engine.CreateEnginePlugin.handle_pool_kwargs(pool_cls, pool_args)
Parse and modify pool kwargs.
- class sqlalchemy.engine.Engine(pool, dialect, url, logging_name=None, echo=None, proxy=None, execution_options=None)
Connects a Pool and Dialect together to provide a source of database connectivity and behavior.
An Engine object is instantiated publicly using the create_engine() function.
Members
begin(), connect(), contextual_connect(), dispose(), driver, execute(), execution_options(), has_table(), name, raw_connection(), run_callable(), scalar(), schema_for_object, table_names(), transaction(), update_execution_options()
Class signature
class sqlalchemy.engine.Engine (sqlalchemy.engine.Connectable, sqlalchemy.log.Identified)
- method sqlalchemy.engine.Engine.begin(close_with_result=False)
Return a context manager delivering a Connection with a Transaction established.
E.g.:
with engine.begin() as conn:
    conn.execute("insert into table (x, y, z) values (1, 2, 3)")
    conn.execute("my_special_procedure(5)")
Upon successful operation, the Transaction is committed. If an error is raised, the Transaction is rolled back.
The close_with_result flag is normally False, and indicates that the Connection will be closed when the operation is complete. When set to True, it indicates the Connection is in “single use” mode, where the ResultProxy returned by the first call to Connection.execute() will close the Connection when that ResultProxy has exhausted all result rows.
See also
Engine.connect() - procure a Connection from an Engine.
Connection.begin() - start a Transaction for a particular Connection.
- method sqlalchemy.engine.Engine.connect(**kwargs)
Return a new Connection object.
The Connection object is a facade that uses a DBAPI connection internally in order to communicate with the database. This connection is procured from the connection-holding Pool referenced by this Engine. When the Connection.close() method of the Connection object is called, the underlying DBAPI connection is then returned to the connection pool, where it may be used again in a subsequent call to Engine.connect().
- method sqlalchemy.engine.Engine.contextual_connect(close_with_result=False, **kwargs)
Return a Connection object which may be part of some ongoing context.
By default, this method does the same thing as Engine.connect(). Subclasses of Engine may override this method to provide contextual behavior.
- Parameters:
close_with_result – When True, the first ResultProxy created by the Connection will call the Connection.close() method of that connection as soon as any pending result rows are exhausted. This is used to supply the “connectionless execution” behavior provided by the Engine.execute() method.
- method sqlalchemy.engine.Engine.dispose()
Dispose of the connection pool used by this Engine.
This has the effect of fully closing all currently checked in database connections. Connections that are still checked out will not be closed, however they will no longer be associated with this Engine, so when they are closed individually, eventually the Pool which they are associated with will be garbage collected and they will be closed out fully, if not already closed on checkin.
A new connection pool is created immediately after the old one has been disposed. This new pool, like all SQLAlchemy connection pools, does not make any actual connections to the database until one is first requested, so as long as the Engine isn’t used again, no new connections will be made.
See also
Engine Disposal
- attribute sqlalchemy.engine.Engine.driver
-
method
sqlalchemy.engine.Engine.
execute(statement, *multiparams, **params)¶ Executes the given construct and returns a
ResultProxy
.The arguments are the same as those used by
Connection.execute()
.Here, a
Connection
is acquired using theEngine.contextual_connect()
method, and the statement executed with that connection. The returnedResultProxy
is flagged such that when theResultProxy
is exhausted and its underlying cursor is closed, theConnection
created here will also be closed, which allows its associated DBAPI connection resource to be returned to the connection pool.
method sqlalchemy.engine.Engine.execution_options(**opt)¶

Return a new Engine that will provide Connection objects with the given execution options.

The returned Engine remains related to the original Engine in that it shares the same connection pool and other state:

The Pool used by the new Engine is the same instance. The Engine.dispose() method will replace the connection pool instance for the parent engine as well as this one.

Event listeners are “cascaded” - meaning, the new Engine inherits the events of the parent, and new events can be associated with the new Engine individually.

The logging configuration and logging_name are copied from the parent Engine.

The intent of the Engine.execution_options() method is to implement “sharding” schemes where multiple Engine objects refer to the same connection pool, but are differentiated by options that would be consumed by a custom event:

primary_engine = create_engine("mysql://")

shard1 = primary_engine.execution_options(shard_id="shard1")
shard2 = primary_engine.execution_options(shard_id="shard2")
Above, the shard1 engine serves as a factory for Connection objects that will contain the execution option shard_id=shard1, and shard2 will produce Connection objects that contain the execution option shard_id=shard2.

An event handler can consume the above execution option to perform a schema switch or other operation, given a connection. Below we emit a MySQL use statement to switch databases, at the same time keeping track of which database we’ve established using the Connection.info dictionary, which gives us a persistent storage space that follows the DBAPI connection:

from sqlalchemy import event
from sqlalchemy.engine import Engine

shards = {"default": "base", "shard1": "db1", "shard2": "db2"}

@event.listens_for(Engine, "before_cursor_execute")
def _switch_shard(conn, cursor, stmt, params, context, executemany):
    shard_id = conn._execution_options.get('shard_id', "default")
    current_shard = conn.info.get("current_shard", None)
    if current_shard != shard_id:
        cursor.execute("use %s" % shards[shard_id])
        conn.info["current_shard"] = shard_id
See also

Connection.execution_options() - update execution options on a Connection object.

Engine.update_execution_options() - update the execution options for a given Engine in place.
method sqlalchemy.engine.Engine.has_table(table_name, schema=None)¶

Return True if the given backend has a table of the given name.

See also

Fine Grained Reflection with Inspector - detailed schema inspection using the Inspector interface.

quoted_name - used to pass quoting information along with a schema identifier.
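A short sketch of a typical existence check before emitting DDL; the users table definition here is hypothetical and the engine is assumed to have been created earlier via create_engine():

from sqlalchemy import MetaData, Table, Column, Integer, String

metadata = MetaData()
users = Table(
    "users", metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String(50)),
)

# only emit CREATE TABLE if the backend doesn't already have the table
if not engine.has_table("users"):
    metadata.create_all(engine)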
attribute sqlalchemy.engine.Engine.name¶
method sqlalchemy.engine.Engine.raw_connection(_connection=None)¶

Return a “raw” DBAPI connection from the connection pool.

The returned object is a proxied version of the DBAPI connection object used by the underlying driver in use. The object will have all the same behavior as the real DBAPI connection, except that its close() method will result in the connection being returned to the pool, rather than being closed for real.

This method provides direct DBAPI connection access for special situations when the API provided by Connection is not needed. When a Connection object is already present, the DBAPI connection is available using the Connection.connection accessor.
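A brief sketch of dropping down to the raw DBAPI connection; the cursor calls below are plain DBAPI usage, and close() here returns the connection to the pool rather than closing it for real:

raw_conn = engine.raw_connection()
try:
    cursor = raw_conn.cursor()
    cursor.execute("select 1")
    print(cursor.fetchall())
    cursor.close()
    raw_conn.commit()
finally:
    raw_conn.close()   # returns the DBAPI connection to the pool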
method sqlalchemy.engine.Engine.run_callable(callable_, *args, **kwargs)¶

Given a callable object or function, execute it, passing a Connection as the first argument.

The given *args and **kwargs are passed subsequent to the Connection argument.

This function, along with Connection.run_callable(), allows a function to be run with a Connection or Engine object without the need to know which one is being dealt with.
method sqlalchemy.engine.Engine.scalar(statement, *multiparams, **params)¶

Executes and returns the first column of the first row.

The underlying cursor is closed after execution.
attribute sqlalchemy.engine.Engine.schema_for_object = <sqlalchemy.sql.schema._SchemaTranslateMap object>¶

Return the “.schema” attribute for an object.

Used for Table, Sequence and similar objects, and takes into account the Connection.execution_options.schema_translate_map parameter.

New in version 1.1.
method sqlalchemy.engine.Engine.table_names(schema=None, connection=None)¶

Return a list of all table names available in the database.

Parameters:

schema – Optional, retrieve names from a non-default schema.

connection – Optional, use a specified connection. Default is the contextual_connect for this Engine.
method sqlalchemy.engine.Engine.transaction(callable_, *args, **kwargs)¶

Execute the given function within a transaction boundary.

The function is passed a Connection newly procured from Engine.contextual_connect() as the first argument, followed by the given *args and **kwargs.

e.g.:

def do_something(conn, x, y):
    conn.execute("some statement", {'x':x, 'y':y})

engine.transaction(do_something, 5, 10)

The operations inside the function are all invoked within the context of a single Transaction. Upon success, the transaction is committed. If an exception is raised, the transaction is rolled back before propagating the exception.

Note

The transaction() method is superseded by the usage of the Python with: statement, which can be used with Engine.begin():

with engine.begin() as conn:
    conn.execute("some statement", {'x':5, 'y':10})

See also

Engine.begin() - engine-level transactional context

Connection.transaction() - connection-level version of Engine.transaction()
method sqlalchemy.engine.Engine.update_execution_options(**opt)¶

Update the default execution_options dictionary of this Engine.

The given keys/values in **opt are added to the default execution options that will be used for all connections. The initial contents of this dictionary can be sent via the execution_options parameter to create_engine().
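As an illustrative sketch, an option such as stream_results (honored only by dialects that support server-side cursors) could be applied as a default for all future connections procured from this Engine:

# every Connection procured from this Engine will now carry this option
engine.update_execution_options(stream_results=True)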
- class sqlalchemy.engine.ExceptionContext¶

Encapsulate information about an error condition in progress.

This object exists solely to be passed to the ConnectionEvents.handle_error() event, supporting an interface that can be extended without backwards-incompatibility.

Members

chained_exception, connection, cursor, engine, execution_context, invalidate_pool_on_disconnect, is_disconnect, original_exception, parameters, sqlalchemy_exception, statement

New in version 0.9.7.
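A minimal sketch of how an ExceptionContext is typically received, via the ConnectionEvents.handle_error() event; the logger name here is arbitrary:

import logging

from sqlalchemy import event
from sqlalchemy.engine import Engine

log = logging.getLogger("myapp.sql")

@event.listens_for(Engine, "handle_error")
def receive_handle_error(exception_context):
    # exception_context is an ExceptionContext instance
    log.warning(
        "error running statement %r: %s",
        exception_context.statement,
        exception_context.original_exception,
    )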
attribute sqlalchemy.engine.ExceptionContext.chained_exception = None¶

The exception that was returned by the previous handler in the exception chain, if any.

If present, this exception will be the one ultimately raised by SQLAlchemy unless a subsequent handler replaces it.

May be None.
attribute sqlalchemy.engine.ExceptionContext.connection = None¶

The Connection in use during the exception.

This member is present, except in the case of a failure when first connecting.
attribute sqlalchemy.engine.ExceptionContext.cursor = None¶

The DBAPI cursor object.

May be None.
attribute sqlalchemy.engine.ExceptionContext.engine = None¶

The Engine in use during the exception.

This member should always be present, even in the case of a failure when first connecting.

New in version 1.0.0.
attribute sqlalchemy.engine.ExceptionContext.execution_context = None¶

The ExecutionContext corresponding to the execution operation in progress.

This is present for statement execution operations, but not for operations such as transaction begin/end. It also is not present when the exception was raised before the ExecutionContext could be constructed.

Note that the ExceptionContext.statement and ExceptionContext.parameters members may represent a different value than that of the ExecutionContext, potentially in the case where a ConnectionEvents.before_cursor_execute() event or similar modified the statement/parameters to be sent.

May be None.
attribute sqlalchemy.engine.ExceptionContext.invalidate_pool_on_disconnect = True¶

Represent whether all connections in the pool should be invalidated when a “disconnect” condition is in effect.

Setting this flag to False within the scope of the ConnectionEvents.handle_error() event will have the effect such that the full collection of connections in the pool will not be invalidated during a disconnect; only the current connection that is the subject of the error will actually be invalidated.

The purpose of this flag is for custom disconnect-handling schemes where the invalidation of other connections in the pool is to be performed based on other conditions, or even on a per-connection basis.

New in version 1.0.3.
attribute sqlalchemy.engine.ExceptionContext.is_disconnect = None¶

Represent whether the exception, as it occurred, represents a “disconnect” condition.

This flag will always be True or False within the scope of the ConnectionEvents.handle_error() handler.

SQLAlchemy will defer to this flag in order to determine whether or not the connection should be invalidated subsequently. That is, by assigning to this flag, a “disconnect” event which then results in a connection and pool invalidation can be invoked or prevented by changing this flag.
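A sketch of how the flag might be assigned within a handle_error listener in order to treat a driver-specific error message as a disconnect; the message string is hypothetical:

from sqlalchemy import event
from sqlalchemy.engine import Engine

@event.listens_for(Engine, "handle_error")
def detect_custom_disconnect(context):
    if "MY CUSTOM DISCONNECT MESSAGE" in str(context.original_exception):
        # mark the condition as a disconnect, triggering invalidation
        context.is_disconnect = True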
attribute sqlalchemy.engine.ExceptionContext.original_exception = None¶

The exception object which was caught.

This member is always present.
attribute sqlalchemy.engine.ExceptionContext.parameters = None¶

Parameter collection that was emitted directly to the DBAPI.

May be None.
attribute sqlalchemy.engine.ExceptionContext.sqlalchemy_exception = None¶

The sqlalchemy.exc.StatementError which wraps the original, and will be raised if exception handling is not circumvented by the event.

May be None, as not all exception types are wrapped by SQLAlchemy. For DBAPI-level exceptions that subclass the dbapi’s Error class, this field will always be present.
attribute sqlalchemy.engine.ExceptionContext.statement = None¶

String SQL statement that was emitted directly to the DBAPI.

May be None.
- class sqlalchemy.engine.NestedTransaction(connection, parent)¶
Represent a ‘nested’, or SAVEPOINT transaction.
A new NestedTransaction object may be procured using the Connection.begin_nested() method.

The interface is the same as that of Transaction.

Class signature

class sqlalchemy.engine.NestedTransaction (sqlalchemy.engine.Transaction)
- class sqlalchemy.engine.ResultProxy(context)¶
Wraps a DB-API cursor object to provide easier access to row columns.
Individual columns may be accessed by their integer position, case-insensitive column name, or by schema.Column object. e.g.:

row = fetchone()

col1 = row[0]                  # access via integer position
col2 = row['col2']             # access via name
col3 = row[mytable.c.mycol]    # access via Column object.
Members
_soft_close(), close(), fetchall(), fetchmany(), fetchone(), first(), inserted_primary_key, is_insert, keys(), last_inserted_params(), last_updated_params(), lastrow_has_defaults(), lastrowid, next(), postfetch_cols(), prefetch_cols(), returned_defaults, returns_rows, rowcount, scalar(), supports_sane_multi_rowcount(), supports_sane_rowcount()
ResultProxy also handles post-processing of result column data using TypeEngine objects, which are referenced from the originating SQL statement that produced this result set.

method sqlalchemy.engine.ResultProxy._soft_close()¶

Soft close this ResultProxy.

This releases all DBAPI cursor resources, but leaves the ResultProxy “open” from a semantic perspective, meaning the fetchXXX() methods will continue to return empty results.
This method is called automatically when:
all result rows are exhausted using the fetchXXX() methods.
cursor.description is None.
This method is not public, but is documented in order to clarify the “autoclose” process used.
New in version 1.0.0.
method sqlalchemy.engine.ResultProxy.close()¶

Close this ResultProxy.
This closes out the underlying DBAPI cursor corresponding to the statement execution, if one is still present. Note that the DBAPI cursor is automatically released when the ResultProxy exhausts all available rows. ResultProxy.close() is generally an optional method except in the case when discarding a ResultProxy that still has additional rows pending for fetch.

In the case of a result that is the product of connectionless execution, the underlying Connection object is also closed, which releases DBAPI connection resources.

After this method is called, it is no longer valid to call upon the fetch methods, which will raise a ResourceClosedError on subsequent use.

Changed in version 1.0.0: the ResultProxy.close() method has been separated out from the process that releases the underlying DBAPI cursor resource. The “auto close” feature of the Connection now performs a so-called “soft close”, which releases the underlying DBAPI cursor, but allows the ResultProxy to still behave as an open-but-exhausted result set; the actual ResultProxy.close() method is never called. It is still safe to discard a ResultProxy that has been fully exhausted without calling this method.
method sqlalchemy.engine.ResultProxy.fetchall()¶

Fetch all rows, just like DB-API cursor.fetchall().

After all rows have been exhausted, the underlying DBAPI cursor resource is released, and the object may be safely discarded.

Subsequent calls to ResultProxy.fetchall() will return an empty list. After the ResultProxy.close() method is called, the method will raise ResourceClosedError.

Changed in version 1.0.0: Added “soft close” behavior which allows the result to be used in an “exhausted” state prior to calling the ResultProxy.close() method.
method sqlalchemy.engine.ResultProxy.fetchmany(size=None)¶

Fetch many rows, just like DB-API cursor.fetchmany(size=cursor.arraysize).

After all rows have been exhausted, the underlying DBAPI cursor resource is released, and the object may be safely discarded.

Calls to ResultProxy.fetchmany() after all rows have been exhausted will return an empty list. After the ResultProxy.close() method is called, the method will raise ResourceClosedError.

Changed in version 1.0.0: Added “soft close” behavior which allows the result to be used in an “exhausted” state prior to calling the ResultProxy.close() method.
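A short sketch of batch-wise consumption using fetchmany(); the users table and the process() handler are hypothetical:

result = engine.execute("select id, name from users")
while True:
    batch = result.fetchmany(100)
    if not batch:
        break    # all rows exhausted; cursor resources already released
    for row in batch:
        process(row)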
method sqlalchemy.engine.ResultProxy.fetchone()¶

Fetch one row, just like DB-API cursor.fetchone().

After all rows have been exhausted, the underlying DBAPI cursor resource is released, and the object may be safely discarded.

Calls to ResultProxy.fetchone() after all rows have been exhausted will return None. After the ResultProxy.close() method is called, the method will raise ResourceClosedError.

Changed in version 1.0.0: Added “soft close” behavior which allows the result to be used in an “exhausted” state prior to calling the ResultProxy.close() method.
method sqlalchemy.engine.ResultProxy.first()¶

Fetch the first row and then close the result set unconditionally.

Returns None if no row is present.

After calling this method, the object is fully closed, e.g. the ResultProxy.close() method will have been called.
attribute sqlalchemy.engine.ResultProxy.inserted_primary_key¶

Return the primary key for the row just inserted.

The return value is a list of scalar values corresponding to the list of primary key columns in the target table.

This only applies to single row insert() constructs which did not explicitly specify Insert.returning().

Note that primary key columns which specify a server_default clause, or otherwise do not qualify as “autoincrement” columns (see the notes at Column), and were generated using the database-side default, will appear in this list as None unless the backend supports “returning” and the insert statement was executed with “implicit returning” enabled.

Raises InvalidRequestError if the executed statement is not a compiled expression construct or is not an insert() construct.
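A runnable sketch against an in-memory SQLite database, using a hypothetical users table, showing the attribute after a single-row insert():

from sqlalchemy import (
    create_engine, MetaData, Table, Column, Integer, String
)

engine = create_engine("sqlite://")
metadata = MetaData()
users = Table(
    "users", metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String(50)),
)
metadata.create_all(engine)

result = engine.execute(users.insert().values(name="spongebob"))
print(result.inserted_primary_key)   # e.g. [1]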
attribute sqlalchemy.engine.ResultProxy.is_insert¶

True if this ResultProxy is the result of executing an expression language compiled insert() construct.

When True, this implies that the inserted_primary_key attribute is accessible, assuming the statement did not include a user defined “returning” construct.
method sqlalchemy.engine.ResultProxy.keys()¶

Return the current set of string keys for rows.
method sqlalchemy.engine.ResultProxy.last_inserted_params()¶

Return the collection of inserted parameters from this execution.

Raises InvalidRequestError if the executed statement is not a compiled expression construct or is not an insert() construct.
method sqlalchemy.engine.ResultProxy.last_updated_params()¶

Return the collection of updated parameters from this execution.

Raises InvalidRequestError if the executed statement is not a compiled expression construct or is not an update() construct.
method sqlalchemy.engine.ResultProxy.lastrow_has_defaults()¶

Return lastrow_has_defaults() from the underlying ExecutionContext.

See ExecutionContext for details.
attribute sqlalchemy.engine.ResultProxy.lastrowid¶

Return the ‘lastrowid’ accessor on the DBAPI cursor.

This is a DBAPI specific method and is only functional for those backends which support it, for statements where it is appropriate. Its behavior is not consistent across backends.

Usage of this method is normally unnecessary when using insert() expression constructs; the ResultProxy.inserted_primary_key attribute provides a tuple of primary key values for a newly inserted row, regardless of database backend.
method sqlalchemy.engine.ResultProxy.next()¶

Implement the next() protocol.

New in version 1.2.
method sqlalchemy.engine.ResultProxy.postfetch_cols()¶

Return postfetch_cols() from the underlying ExecutionContext.

See ExecutionContext for details.

Raises InvalidRequestError if the executed statement is not a compiled expression construct or is not an insert() or update() construct.
method sqlalchemy.engine.ResultProxy.prefetch_cols()¶

Return prefetch_cols() from the underlying ExecutionContext.

See ExecutionContext for details.

Raises InvalidRequestError if the executed statement is not a compiled expression construct or is not an insert() or update() construct.
attribute sqlalchemy.engine.ResultProxy.returned_defaults¶

Return the values of default columns that were fetched using the ValuesBase.return_defaults() feature.

The value is an instance of RowProxy, or None if ValuesBase.return_defaults() was not used or if the backend does not support RETURNING.

New in version 0.9.0.
attribute sqlalchemy.engine.ResultProxy.returns_rows¶

True if this ResultProxy returns rows.

I.e. if it is legal to call the methods ResultProxy.fetchone(), ResultProxy.fetchmany() and ResultProxy.fetchall().
attribute sqlalchemy.engine.ResultProxy.rowcount¶

Return the ‘rowcount’ for this result.

The ‘rowcount’ reports the number of rows matched by the WHERE criterion of an UPDATE or DELETE statement.

Note

Notes regarding ResultProxy.rowcount:

This attribute returns the number of rows matched, which is not necessarily the same as the number of rows that were actually modified - an UPDATE statement, for example, may have no net change on a given row if the SET values given are the same as those present in the row already. Such a row would be matched but not modified. On backends that feature both styles, such as MySQL, rowcount is configured by default to return the match count in all cases.

ResultProxy.rowcount is only useful in conjunction with an UPDATE or DELETE statement. Contrary to what the Python DBAPI says, it does not return the number of rows available from the results of a SELECT statement as DBAPIs cannot support this functionality when rows are unbuffered.

ResultProxy.rowcount may not be fully implemented by all dialects. In particular, most DBAPIs do not support an aggregate rowcount result from an executemany call. The ResultProxy.supports_sane_rowcount() and ResultProxy.supports_sane_multi_rowcount() methods will report from the dialect if each usage is known to be supported.

Statements that use RETURNING may not return a correct rowcount.
method sqlalchemy.engine.ResultProxy.scalar()¶

Fetch the first column of the first row, and close the result set.

Returns None if no row is present.

After calling this method, the object is fully closed, e.g. the ResultProxy.close() method will have been called.
method sqlalchemy.engine.ResultProxy.supports_sane_multi_rowcount()¶

Return supports_sane_multi_rowcount from the dialect.

See ResultProxy.rowcount for background.
method sqlalchemy.engine.ResultProxy.supports_sane_rowcount()¶

Return supports_sane_rowcount from the dialect.

See ResultProxy.rowcount for background.
- class sqlalchemy.engine.RowProxy(parent, row, processors, keymap)¶
Proxy values from a single cursor row.
Mostly follows “ordered dictionary” behavior, mapping result values to the string-based column name, the integer position of the result in the row, as well as Column instances which can be mapped to the original Columns that produced this result set (for results that correspond to constructed SQL expressions).
Class signature

class sqlalchemy.engine.RowProxy (sqlalchemy.engine.BaseRowProxy)

method sqlalchemy.engine.RowProxy.has_key(key)¶

Return True if this RowProxy contains the given key.
method sqlalchemy.engine.RowProxy.items()¶

Return a list of tuples, each tuple containing a key/value pair.
method sqlalchemy.engine.RowProxy.keys()¶

Return the list of keys as strings represented by this RowProxy.
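A brief sketch of the mapping-style access described above, again using a hypothetical users table:

result = engine.execute("select id, name from users")
row = result.fetchone()
if row is not None:
    print(row.keys())            # column names as strings
    print(row.items())           # list of (key, value) tuples
    print(row["name"], row[0])   # access by name and by position
    print(dict(row))             # RowProxy also behaves as a mapping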
- class sqlalchemy.engine.Transaction(connection, parent)¶
Represent a database transaction in progress.
The Transaction object is procured by calling the Connection.begin() method of Connection:

from sqlalchemy import create_engine
engine = create_engine("postgresql://scott:tiger@localhost/test")
connection = engine.connect()
trans = connection.begin()
connection.execute("insert into x (a, b) values (1, 2)")
trans.commit()

The object provides rollback() and commit() methods in order to control transaction boundaries. It also implements a context manager interface so that the Python with statement can be used with the Connection.begin() method:

with connection.begin():
    connection.execute("insert into x (a, b) values (1, 2)")
The Transaction object is not threadsafe.
Members

close(), commit(), rollback()

method sqlalchemy.engine.Transaction.close()¶

Close this Transaction.

If this transaction is the base transaction in a begin/commit nesting, the transaction will rollback(). Otherwise, the method returns.
This is used to cancel a Transaction without affecting the scope of an enclosing transaction.
method sqlalchemy.engine.Transaction.commit()¶

Commit this Transaction.
method sqlalchemy.engine.Transaction.rollback()¶

Roll back this Transaction.
- class sqlalchemy.engine.TwoPhaseTransaction(connection, xid)¶
Represent a two-phase transaction.
A new TwoPhaseTransaction object may be procured using the Connection.begin_twophase() method.

The interface is the same as that of Transaction with the addition of the prepare() method.

Members

prepare()

Class signature

class sqlalchemy.engine.TwoPhaseTransaction (sqlalchemy.engine.Transaction)

method sqlalchemy.engine.TwoPhaseTransaction.prepare()¶

Prepare this TwoPhaseTransaction.

After a PREPARE, the transaction can be committed.
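A sketch of the two-phase sequence; this requires backend and driver support for two-phase commit (for example PostgreSQL with prepared transactions enabled), and the table shown is hypothetical:

connection = engine.connect()
xa = connection.begin_twophase()
connection.execute("insert into x (a, b) values (1, 2)")
xa.prepare()   # emits PREPARE; the transaction can now be committed
xa.commit()
connection.close()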