Working with Engines and Connections¶
This section details direct usage of the Engine, Connection, and related objects. It’s important to note that when using the SQLAlchemy ORM, these objects are not generally accessed; instead, the Session object is used as the interface to the database. However, for applications that are built around direct usage of textual SQL statements and/or SQL expression constructs without involvement by the ORM’s higher level management services, the Engine and Connection are king (and queen?) - read on.
Basic Usage¶
Recall from Engine Configuration that an Engine is created via the create_engine() call:
engine = create_engine('mysql://scott:tiger@localhost/test')
The typical usage of create_engine() is once per particular database URL, held globally for the lifetime of a single application process. A single Engine manages many individual DBAPI connections on behalf of the process and is intended to be called upon in a concurrent fashion. The Engine is not synonymous with the DBAPI connect function, which represents just one connection resource - the Engine is most efficient when created just once at the module level of an application, not per-object or per-function call.
The most basic function of the Engine is to provide access to a Connection, which can then invoke SQL statements. Emitting a textual statement to the database looks like:
with engine.connect() as connection:
    result = connection.execute("select username from users")
    for row in result:
        print("username:", row['username'])
Above, the Engine.connect() method returns a Connection object, and by using it in a Python context manager (e.g. the with: statement) the Connection.close() method is automatically invoked at the end of the block. The Connection is a proxy object for an actual DBAPI connection. The DBAPI connection is retrieved from the connection pool at the point at which the Connection is created.

The object returned is known as a ResultProxy, which references a DBAPI cursor and provides methods for fetching rows similar to that of the DBAPI cursor. The DBAPI cursor will be closed by the ResultProxy when all of its result rows (if any) are exhausted. A ResultProxy that returns no rows, such as that of an UPDATE statement (without any returned rows), releases cursor resources immediately upon construction.

When the Connection is closed at the end of the with: block, the referenced DBAPI connection is released to the connection pool. From the perspective of the database itself, the connection pool will not actually “close” the connection assuming the pool has room to store this connection for the next use. When the connection is returned to the pool for re-use, the pooling mechanism issues a rollback() call on the DBAPI connection so that any transactional state or locks are removed, and the connection is ready for its next use.

Our example above illustrated the execution of a textual SQL string. The Connection.execute() method can of course accommodate more than that, including the variety of SQL expression constructs described in SQL Expression Language Tutorial.
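For example, a textual statement with a bound parameter may be passed as a text() construct; this is a brief sketch, assuming text is imported from sqlalchemy:

from sqlalchemy import text

with engine.connect() as connection:
    # bound parameters are passed as a dictionary and rendered in the
    # DBAPI's paramstyle by SQLAlchemy
    result = connection.execute(
        text("select username from users where username = :name"),
        {"name": "john"})
    for row in result:
        print("username:", row['username'])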
Using Transactions¶
Note
This section describes how to use transactions when working directly with Engine and Connection objects. When using the SQLAlchemy ORM, the public API for transaction control is via the Session object, which makes usage of the Transaction object internally. See Managing Transactions for further information.
The Connection object provides a Connection.begin() method which returns a Transaction object. Like the Connection itself, this object is usually used within a Python with: block so that its scope is managed:
with engine.connect() as connection:
    with connection.begin():
        r1 = connection.execute(table1.select())
        connection.execute(table1.insert(), {"col1": 7, "col2": "this is some data"})
The above block can be stated more simply by using the Engine.begin() method of Engine:
# runs a transaction
with engine.begin() as connection:
    r1 = connection.execute(table1.select())
    connection.execute(table1.insert(), {"col1": 7, "col2": "this is some data"})
Each .begin() method manages its block such that the transaction is committed when the block completes. If an exception is raised, the transaction is instead rolled back, and the exception is propagated outwards.

The underlying object used to represent the transaction is the Transaction object. This object is returned by the Connection.begin() method and includes the methods Transaction.commit() and Transaction.rollback(). The context manager calling form, which invokes these methods automatically, is recommended as a best practice.
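For completeness, the same pattern written with explicit Transaction methods might look like the following sketch, assuming the table1 Table object from the examples above:

with engine.connect() as connection:
    trans = connection.begin()
    try:
        connection.execute(table1.insert(), {"col1": 7, "col2": "this is some data"})
        trans.commit()    # commit the transaction explicitly
    except:
        trans.rollback()  # roll back on any error, then re-raise
        raise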
Nesting of Transaction Blocks¶
Deprecated since version 1.4: The “transaction nesting” feature of SQLAlchemy is a legacy feature that is deprecated as of the 1.4 release and will no longer be part of the 2.0 series of SQLAlchemy. The pattern has proven to be a little too awkward and complicated, unless an application makes more of a first-class framework around the behavior. See the following subsection Arbitrary Transaction Nesting as an Antipattern.
The Transaction object also handles “nested” behavior by keeping track of the outermost begin/commit pair. In this example, two functions both issue a transaction on a Connection, but only the outermost Transaction object actually takes effect when it is committed.
# method_a starts a transaction and calls method_b
def method_a(connection):
    with connection.begin():  # open a transaction
        method_b(connection)

# method_b also starts a transaction
def method_b(connection):
    with connection.begin():  # open a transaction - this runs in the
                              # context of method_a's transaction
        connection.execute("insert into mytable values ('bat', 'lala')")
        connection.execute(mytable.insert(), {"col1": "bat", "col2": "lala"})

# open a Connection and call method_a
with engine.connect() as conn:
    method_a(conn)
Above, method_a is called first, which calls connection.begin(). Then it calls method_b. When method_b calls connection.begin(), it just increments a counter that is decremented when it calls commit(). If either method_a or method_b calls rollback(), the whole transaction is rolled back. The transaction is not committed until method_a calls the commit() method. This “nesting” behavior allows the creation of functions which “guarantee” that a transaction will be used if one was not already available, but will automatically participate in an enclosing transaction if one exists.
Arbitrary Transaction Nesting as an Antipattern¶
With many years of experience, the above “nesting” pattern has not proven to be very popular, and where it has been observed in large projects such as Openstack, it tends to be complicated.
The most ideal way to organize an application would have a single, or at least very few, points at which the “beginning” and “commit” of all database transactions is demarcated. This is also the general idea discussed in terms of the ORM at When do I construct a Session, when do I commit it, and when do I close it?. To adapt the example from the previous section to this practice looks like:
# method_a calls method_b
def method_a(connection):
    method_b(connection)

# method_b uses the connection and assumes the transaction
# is external
def method_b(connection):
    connection.execute(text("insert into mytable values ('bat', 'lala')"))
    connection.execute(mytable.insert(), {"col1": "bat", "col2": "lala"})

# open a Connection inside of a transaction and call method_a
with engine.begin() as conn:
    method_a(conn)
That is, method_a() and method_b() do not deal with the details of the transaction at all; the transactional scope of the connection is defined externally to the functions that have a SQL dialogue with the connection.

It may be observed that the above code has fewer lines, and less indentation, which tends to correlate with lower cyclomatic complexity. The above code is organized such that method_a() and method_b() are always invoked from a point at which a transaction is begun. The previous version of the example features a method_a() and a method_b() that are trying to be agnostic of this fact, which suggests they are prepared for at least twice as many potential codepaths through them.
Migrating from the “nesting” pattern¶
As SQLAlchemy’s intrinsic-nested pattern is considered legacy, an application that for either legacy or novel reasons still seeks to have a context that automatically frames transactions should seek to maintain this functionality through the use of a custom Python context manager. A similar example is also provided in terms of the ORM in the “seealso” section below.
To provide backwards compatibility for applications that make use of this pattern, the following context manager or a similar implementation based on a decorator may be used:
import contextlib

@contextlib.contextmanager
def transaction(connection):
    if not connection.in_transaction():
        with connection.begin():
            yield connection
    else:
        yield connection
The above contextmanager would be used as:
# method_a starts a transaction and calls method_b
def method_a(connection):
    with transaction(connection):  # open a transaction
        method_b(connection)

# method_b either starts a transaction, or uses the one already
# present
def method_b(connection):
    with transaction(connection):  # open a transaction
        connection.execute(text("insert into mytable values ('bat', 'lala')"))
        connection.execute(mytable.insert(), {"col1": "bat", "col2": "lala"})

# open a Connection and call method_a
with engine.connect() as conn:
    method_a(conn)
A similar approach may be taken such that connectivity is established on demand as well; the below approach features a single-use context manager that accesses an enclosing state in order to test if connectivity is already present:
import contextlib

def connectivity(engine):
    connection = None

    @contextlib.contextmanager
    def connect():
        nonlocal connection

        if connection is None:
            connection = engine.connect()
            with connection:
                with connection.begin():
                    yield connection
        else:
            yield connection

    return connect
Using the above would look like:
# method_a passes along connectivity context, at the same time
# it chooses to establish a connection by calling "with"
def method_a(connectivity):
    with connectivity():
        method_b(connectivity)

# method_b also wants to use a connection from the context, so it
# also calls "with:", but also it actually uses the connection.
def method_b(connectivity):
    with connectivity() as connection:
        connection.execute(text("insert into mytable values ('bat', 'lala')"))
        connection.execute(mytable.insert(), {"col1": "bat", "col2": "lala"})

# create a new connection/transaction context object and call
# method_a
method_a(connectivity(engine))
The above context manager acts not only as a “transaction” context but also as a context that manages having an open connection against a particular Engine. When using the ORM Session, this connectivity management is provided by the Session itself. An overview of ORM connectivity patterns is at Managing Transactions.
See also
Migrating from the “subtransaction” pattern - ORM version
Library Level (e.g. emulated) Autocommit¶
Note
The “autocommit” feature of SQLAlchemy is a legacy feature that will be deprecated in an upcoming release. New usage paradigms will eliminate the need for it to be present.
Note
This section discusses the feature within SQLAlchemy that automatically invokes the .commit() method on a DBAPI connection, however this is against a DBAPI connection that is itself transactional. For true AUTOCOMMIT, see the next section Setting Transaction Isolation Levels including DBAPI Autocommit.
The previous transaction example illustrates how to use Transaction so that several executions can take part in the same transaction. What happens when we issue an INSERT, UPDATE or DELETE call without using Transaction? While some DBAPI implementations provide various special “non-transactional” modes, the core behavior of DBAPI per PEP-0249 is that a transaction is always in progress, providing only rollback() and commit() methods but no begin(). SQLAlchemy assumes this is the case for any given DBAPI.

Given this requirement, SQLAlchemy implements its own “autocommit” feature which works completely consistently across all backends. This is achieved by detecting statements which represent data-changing operations, i.e. INSERT, UPDATE, DELETE, as well as data definition language (DDL) statements such as CREATE TABLE, ALTER TABLE, and then issuing a COMMIT automatically if no transaction is in progress. The detection is based on the presence of the autocommit=True execution option on the statement. If the statement is a text-only statement and the flag is not set, a regular expression is used to detect INSERT, UPDATE, DELETE, as well as a variety of other commands for a particular backend:
conn = engine.connect()
conn.execute("INSERT INTO users VALUES (1, 'john')") # autocommits
The “autocommit” feature is only in effect when no Transaction has otherwise been declared. This means the feature is not generally used with the ORM, as the Session object by default always maintains an ongoing Transaction.

Full control of the “autocommit” behavior is available using the generative Connection.execution_options() method provided on Connection and Engine, using the “autocommit” flag which will turn on or off the autocommit for the selected scope. For example, a text() construct representing a stored procedure that commits might use it so that a SELECT statement will issue a COMMIT:

with engine.connect().execution_options(autocommit=True) as conn:
    conn.execute(text("SELECT my_mutating_procedure()"))
Setting Transaction Isolation Levels including DBAPI Autocommit¶
Most DBAPIs support the concept of configurable transaction isolation levels. These are traditionally the four levels “READ UNCOMMITTED”, “READ COMMITTED”, “REPEATABLE READ” and “SERIALIZABLE”. These are usually applied to a DBAPI connection before it begins a new transaction, noting that most DBAPIs will begin this transaction implicitly when SQL statements are first emitted.
DBAPIs that support isolation levels also usually support the concept of true “autocommit”, which means that the DBAPI connection itself will be placed into a non-transactional autocommit mode. This usually means that the typical DBAPI behavior of emitting “BEGIN” to the database automatically no longer occurs, but it may also include other directives. When using this mode, the DBAPI does not use a transaction under any circumstances. SQLAlchemy methods like .begin(), .commit() and .rollback() pass silently and have no effect.
Instead, each statement invoked upon the connection will commit any changes automatically; it sometimes also means that the connection itself will use fewer server-side database resources. For this reason and others, “autocommit” mode is often desirable for non-transactional applications that need to read individual tables or rows outside the scope of a true ACID transaction.
SQLAlchemy dialects should support these isolation levels as well as autocommit to as great a degree as possible. The levels are set via a family of “execution_options” parameters and methods that are available throughout the Core, such as the Connection.execution_options() method. The parameter is known as Connection.execution_options.isolation_level and the values are strings which are typically a subset of the following names:
# possible values for Connection.execution_options(isolation_level="<value>")
"AUTOCOMMIT"
"READ COMMITTED"
"READ UNCOMMITTED"
"REPEATABLE READ"
"SERIALIZABLE"
Not every DBAPI supports every value; if an unsupported value is used for a certain backend, an error is raised.
For example, to force REPEATABLE READ on a specific connection, then begin a transaction:
with engine.connect().execution_options(isolation_level="REPEATABLE READ") as connection:
    with connection.begin():
        connection.execute(<statement>)
The Connection.execution_options.isolation_level option may also be set engine wide, as is often preferable. This is achieved by passing it within the create_engine.execution_options parameter to create_engine():
from sqlalchemy import create_engine

eng = create_engine(
    "postgresql://scott:tiger@localhost/test",
    execution_options={
        "isolation_level": "REPEATABLE READ"
    }
)
With the above setting, the DBAPI connection will be set to use a "REPEATABLE READ" isolation level setting for each new transaction begun.
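As a quick sanity check, the level actually in effect on a new connection can be inspected with Connection.get_isolation_level(); this is a brief sketch, and the exact string returned is backend-specific:

with eng.connect() as connection:
    # emits a query against the DBAPI connection to report the level in effect
    print(connection.get_isolation_level())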
An application that frequently chooses to run operations within different isolation levels may wish to create multiple “sub-engines” of a lead Engine, each of which will be configured to a different isolation level. One such use case is an application that has operations that break into “transactional” and “read-only” operations; a separate Engine that makes use of "AUTOCOMMIT" may be separated off from the main engine:
from sqlalchemy import create_engine
eng = create_engine("postgresql://scott:tiger@localhost/test")
autocommit_engine = eng.execution_options(isolation_level="AUTOCOMMIT")
Above, the Engine.execution_options() method creates a shallow copy of the original Engine. Both eng and autocommit_engine share the same dialect and connection pool. However, the “AUTOCOMMIT” mode will be set upon connections when they are acquired from the autocommit_engine.
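Usage might look like the following sketch; each statement issued on a connection from autocommit_engine is committed as it proceeds, with no begin()/commit() required (some_table is a placeholder Table object used here for illustration):

with autocommit_engine.connect() as connection:
    # no transaction is begun; the DBAPI connection is in AUTOCOMMIT mode
    connection.execute(some_table.insert(), {"data": "some data"})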
The isolation level setting, regardless of which one it is, is unconditionally reverted when a connection is returned to the connection pool.
Note
The Connection.execution_options.isolation_level parameter necessarily does not apply to statement-level options, such as those of Executable.execution_options(). This is because the option must be set on a DBAPI connection on a per-transaction basis.
Connectionless Execution, Implicit Execution¶
Note
“Connectionless” and “implicit” execution are legacy SQLAlchemy features that will be deprecated in an upcoming release.
Recall from the first section we mentioned executing with and without explicit usage of Connection. “Connectionless” execution refers to the usage of the execute() method on an object which is not a Connection. This was illustrated using the Engine.execute() method of Engine:
result = engine.execute("select username from users")
for row in result:
    print("username:", row['username'])
In addition to “connectionless” execution, it is also possible to use the Executable.execute() method of any Executable construct, which is a marker for SQL expression objects that support execution. The SQL expression object itself references an Engine or Connection known as the bind, which it uses in order to provide so-called “implicit” execution services.
Given a table as below:
from sqlalchemy import MetaData, Table, Column, Integer, String

meta = MetaData()
users_table = Table('users', meta,
    Column('id', Integer, primary_key=True),
    Column('name', String(50))
)
Explicit execution delivers the SQL text or constructed SQL expression to the Connection.execute() method of Connection:

engine = create_engine('sqlite:///file.db')
with engine.connect() as connection:
    result = connection.execute(users_table.select())
    for row in result:
        # ....
Explicit, connectionless execution delivers the expression to the Engine.execute() method of Engine:

engine = create_engine('sqlite:///file.db')
result = engine.execute(users_table.select())
for row in result:
    # ....
result.close()
Implicit execution is also connectionless, and makes usage of the Executable.execute() method on the expression itself. This method is provided as part of the Executable class, which refers to a SQL statement that is sufficient for being invoked against the database. The method makes usage of the assumption that either an Engine or Connection has been bound to the expression object. By “bound” we mean that the special attribute MetaData.bind has been used to associate a series of Table objects and all SQL constructs derived from them with a specific engine:
engine = create_engine('sqlite:///file.db')
meta.bind = engine
result = users_table.select().execute()
for row in result:
    # ....
result.close()
Above, we associate an Engine with a MetaData object using the special attribute MetaData.bind. The select() construct produced from the Table object has a method Executable.execute(), which will search for an Engine that’s “bound” to the Table.
Overall, the usage of “bound metadata” has three general effects:

- SQL statement objects gain an Executable.execute() method which automatically locates a “bind” with which to execute themselves.
- The ORM Session object supports using “bound metadata” in order to establish which Engine should be used to invoke SQL statements on behalf of a particular mapped class, though the Session also features its own explicit system of establishing complex Engine / mapped class configurations.
- The MetaData.create_all(), MetaData.drop_all(), Table.create(), Table.drop(), and “autoload” features all make usage of the bound Engine automatically without the need to pass it explicitly.
Note
The concepts of “bound metadata” and “implicit execution” are not emphasized in modern SQLAlchemy. While they offer some convenience, they are no longer required by any API and are never necessary.
In applications where multiple Engine objects are present, each one logically associated with a certain set of tables (i.e. vertical sharding), the “bound metadata” technique can be used so that individual Table objects can refer to the appropriate Engine automatically; in particular this is supported within the ORM via the Session object as a means to associate Table objects with an appropriate Engine, as an alternative to using the bind arguments accepted directly by the Session. However, the “implicit execution” technique is not at all appropriate for use with the ORM, as it bypasses the transactional context maintained by the Session.
Overall, in the vast majority of cases, “bound metadata” and “implicit execution” are not useful. While “bound metadata” has a marginal level of usefulness with regards to ORM configuration, “implicit execution” is a very old usage pattern that in most cases is more confusing than it is helpful, and its usage is discouraged. Both patterns seem to encourage the overuse of expedient “short cuts” in application design which lead to problems later on.
Modern SQLAlchemy usage, especially the ORM, places a heavy stress on working within the context of a transaction at all times; the “implicit execution” concept makes the job of associating statement execution with a particular transaction much more difficult. The Executable.execute() method on a particular SQL statement usually implies that the execution is not part of any particular transaction, which is usually not the desired effect.
In both “connectionless” examples, the Connection is created behind the scenes; the ResultProxy returned by the execute() call references the Connection used to issue the SQL statement. When the ResultProxy is closed, the underlying Connection is closed for us, resulting in the DBAPI connection being returned to the pool with transactional resources removed.
Translation of Schema Names¶
To support multi-tenancy applications that distribute common sets of tables into multiple schemas, the Connection.execution_options.schema_translate_map execution option may be used to repurpose a set of Table objects to render under different schema names without any changes.
Given a table:
user_table = Table(
    'user', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String(50))
)
The “schema” of this Table as defined by the Table.schema attribute is None. The Connection.execution_options.schema_translate_map can specify that all Table objects with a schema of None would instead render the schema as user_schema_one:
connection = engine.connect().execution_options(
    schema_translate_map={None: "user_schema_one"})

result = connection.execute(user_table.select())
The above code will invoke SQL on the database of the form:
SELECT user_schema_one.user.id, user_schema_one.user.name FROM
user_schema_one.user
That is, the schema name is substituted with our translated name. The map can specify any number of target->destination schemas:
connection = engine.connect().execution_options(
    schema_translate_map={
        None: "user_schema_one",      # no schema name -> "user_schema_one"
        "special": "special_schema",  # schema="special" becomes "special_schema"
        "public": None                # Table objects with schema="public" will render with no schema
    })
The Connection.execution_options.schema_translate_map parameter affects all DDL and SQL constructs generated from the SQL expression language, as derived from the Table or Sequence objects. It does not impact literal string SQL used via the text() construct nor via plain strings passed to Connection.execute().
The feature takes effect only in those cases where the name of the schema is derived directly from that of a Table or Sequence; it does not impact methods where a string schema name is passed directly. By this pattern, it takes effect within the “can create” / “can drop” checks performed when methods such as MetaData.create_all() or MetaData.drop_all() are called, and it takes effect when using table reflection given a Table object. However it does not affect the operations present on the Inspector object, as the schema name is passed to these methods explicitly.
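A brief sketch of applying the option for DDL, using the engine and metadata from the examples above; the CREATE TABLE statements then render under the translated schema name:

schema_engine = engine.execution_options(
    schema_translate_map={None: "user_schema_one"})

# CREATE TABLE for the "user" table renders as "user_schema_one.user"
metadata.create_all(schema_engine)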
Tip
To use the schema translation feature with the ORM Session, set this option at the level of the Engine, then pass that engine to the Session. The Session uses a new Connection for each transaction:
schema_engine = engine.execution_options(schema_translate_map = { ... } )
session = Session(schema_engine)
...
New in version 1.1.
Engine Disposal¶
The Engine refers to a connection pool, which means under normal circumstances, there are open database connections present while the Engine object is still resident in memory. When an Engine is garbage collected, its connection pool is no longer referred to by that Engine, and assuming none of its connections are still checked out, the pool and its connections will also be garbage collected, which has the effect of closing out the actual database connections as well. But otherwise, the Engine will hold onto open database connections assuming it uses the normally default pool implementation of QueuePool.

The Engine is intended to normally be a permanent fixture established up-front and maintained throughout the lifespan of an application. It is not intended to be created and disposed on a per-connection basis; it is instead a registry that maintains both a pool of connections as well as configurational information about the database and DBAPI in use, as well as some degree of internal caching of per-database resources.
However, there are many cases where it is desirable that all connection resources referred to by the Engine be completely closed out. It’s generally not a good idea to rely on Python garbage collection for this to occur for these cases; instead, the Engine can be explicitly disposed using the Engine.dispose() method. This disposes of the engine’s underlying connection pool and replaces it with a new one that’s empty. Provided that the Engine is discarded at this point and no longer used, all checked-in connections which it refers to will also be fully closed.
Valid use cases for calling Engine.dispose() include:

- When a program wants to release any remaining checked-in connections held by the connection pool and expects to no longer be connected to that database at all for any future operations.
- When a program uses multiprocessing or fork(), and an Engine object is copied to the child process, Engine.dispose() should be called so that the engine creates brand new database connections local to that fork. Database connections generally do not travel across process boundaries (see the sketch after this list).
- Within test suites or multitenancy scenarios where many ad-hoc, short-lived Engine objects may be created and disposed.
Connections that are checked out are not discarded when the engine is disposed or garbage collected, as these connections are still strongly referenced elsewhere by the application. However, after Engine.dispose() is called, those connections are no longer associated with that Engine; when they are closed, they will be returned to their now-orphaned connection pool which will ultimately be garbage collected, once all connections which refer to it are also no longer referenced anywhere. Since this process is not easy to control, it is strongly recommended that Engine.dispose() is called only after all checked out connections are checked in or otherwise de-associated from their pool.
An alternative for applications that are negatively impacted by the Engine object’s use of connection pooling is to disable pooling entirely. This typically incurs only a modest performance impact upon the use of new connections, and means that when a connection is checked in, it is entirely closed out and is not held in memory. See Switching Pool Implementations for guidelines on how to disable pooling.
Using the Threadlocal Execution Strategy¶
The “threadlocal” engine strategy is an optional feature which can be used by non-ORM applications to associate transactions with the current thread, such that all parts of the application can participate in that transaction implicitly without the need to explicitly reference a Connection.
Deprecated since version 1.3: The “threadlocal” engine strategy is deprecated, and will be removed in a future release.
This strategy is designed for a particular pattern of usage which is generally considered as a legacy pattern. It has no impact on the “thread safety” of SQLAlchemy components or one’s application. It also should not be used when using an ORM Session object, as the Session itself represents an ongoing transaction and itself handles the job of maintaining connection and transactional resources.
Enabling threadlocal is achieved as follows:
db = create_engine('mysql://localhost/test', strategy='threadlocal')
The above Engine will now acquire a Connection using connection resources derived from a thread-local variable whenever Engine.execute() or Engine.contextual_connect() is called. This connection resource is maintained as long as it is referenced, which allows multiple points of an application to share a transaction while using connectionless execution:
def call_operation1():
    db.execute("insert into users values (?, ?)", 1, "john")

def call_operation2():
    users.update(users.c.user_id==5).execute(name='ed')

db.begin()
try:
    call_operation1()
    call_operation2()
    db.commit()
except:
    db.rollback()
Explicit execution can be mixed with connectionless execution by using the Engine.connect() method to acquire a Connection that is not part of the threadlocal scope:
db.begin()
conn = db.connect()
try:
    conn.execute(log_table.insert(), message="Operation started")
    call_operation1()
    call_operation2()
    db.commit()
    conn.execute(log_table.insert(), message="Operation succeeded")
except:
    db.rollback()
    conn.execute(log_table.insert(), message="Operation failed")
finally:
    conn.close()
To access the Connection that is bound to the threadlocal scope, call Engine.contextual_connect():
conn = db.contextual_connect()
call_operation3(conn)
conn.close()
Calling Connection.close() on the “contextual” connection does not release its resources until all other usages of that resource are closed as well, including that any ongoing transactions are rolled back or committed.
Working with Raw DBAPI Connections¶
There are some cases where SQLAlchemy does not provide a genericized way of accessing some DBAPI functions, such as calling stored procedures as well as dealing with multiple result sets. In these cases, it’s just as expedient to deal with the raw DBAPI connection directly.
The most common way to access the raw DBAPI connection is to get it from an already present Connection object directly. It is available via the Connection.connection attribute:
connection = engine.connect()
dbapi_conn = connection.connection
The DBAPI connection here is actually “proxied” in terms of the originating connection pool, however this is an implementation detail that in most cases can be ignored. As this DBAPI connection is still contained within the scope of an owning Connection object, it is best to make use of the Connection object for most features such as transaction control as well as calling the Connection.close() method; if these operations are performed on the DBAPI connection directly, the owning Connection will not be aware of these changes in state.
To overcome the limitations imposed by the DBAPI connection that is maintained by an owning Connection, a DBAPI connection is also available without the need to procure a Connection first, using the Engine.raw_connection() method of Engine:
dbapi_conn = engine.raw_connection()
This DBAPI connection is again a “proxied” form as was the case before. The purpose of this proxying is now apparent, as when we call the .close() method of this connection, the DBAPI connection is typically not actually closed, but instead released back to the engine’s connection pool:
dbapi_conn.close()
While SQLAlchemy may in the future add built-in patterns for more DBAPI use cases, there are diminishing returns as these cases tend to be rarely needed and they also vary highly dependent on the type of DBAPI in use, so in any case the direct DBAPI calling pattern is always there for those cases where it is needed.
Some recipes for DBAPI connection use follow.
Calling Stored Procedures¶
For stored procedures with special syntactical or parameter concerns, DBAPI-level callproc may be used:
connection = engine.raw_connection()
try:
    cursor = connection.cursor()
    cursor.callproc("my_procedure", ['x', 'y', 'z'])
    results = list(cursor.fetchall())
    cursor.close()
    connection.commit()
finally:
    connection.close()
Multiple Result Sets¶
Multiple result set support is available from a raw DBAPI cursor using the nextset method:
connection = engine.raw_connection()
try:
    cursor = connection.cursor()
    cursor.execute("select * from table1; select * from table2")
    results_one = cursor.fetchall()
    cursor.nextset()
    results_two = cursor.fetchall()
    cursor.close()
finally:
    connection.close()
Registering New Dialects¶
The create_engine() function call locates the given dialect using setuptools entrypoints. These entry points can be established for third party dialects within the setup.py script. For example, to create a new dialect “foodialect://”, the steps are as follows:
- Create a package called foodialect.
- The package should have a module containing the dialect class, which is typically a subclass of sqlalchemy.engine.default.DefaultDialect. In this example let’s say it’s called FooDialect and its module is accessed via foodialect.dialect.
- The entry point can be established in setup.py as follows:

entry_points="""
[sqlalchemy.dialects]
foodialect = foodialect.dialect:FooDialect
"""
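As an illustration only, the foodialect.dialect module might begin with a skeleton along these lines; the names are hypothetical and a real dialect requires considerably more (type mappings, SQL compilers, reflection support and so on):

# foodialect/dialect.py - hypothetical skeleton, not a working dialect
from sqlalchemy.engine.default import DefaultDialect

class FooDialect(DefaultDialect):
    name = "foodialect"

    @classmethod
    def dbapi(cls):
        # return the DBAPI module that this dialect drives
        import foodriver  # hypothetical DBAPI package
        return foodriver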
If the dialect is providing support for a particular DBAPI on top of an existing SQLAlchemy-supported database, the name can be given including a database-qualification. For example, if FooDialect were in fact a MySQL dialect, the entry point could be established like this:
entry_points="""
[sqlalchemy.dialects]
mysql.foodialect = foodialect.dialect:FooDialect
"""
The above entrypoint would then be accessed as create_engine("mysql+foodialect://").
Registering Dialects In-Process¶
SQLAlchemy also allows a dialect to be registered within the current process, bypassing the need for separate installation. Use the register() function as follows:
from sqlalchemy.dialects import registry
registry.register("mysql.foodialect", "myapp.dialect", "MyMySQLDialect")
The above will respond to create_engine("mysql+foodialect://") and load the MyMySQLDialect class from the myapp.dialect module.
Connection / Engine API¶
Object Name | Description
---|---
Connectable | Interface for an object which supports execution of SQL constructs.
Connection | Provides high-level functionality for a wrapped DB-API connection.
CreateEnginePlugin | A set of hooks intended to augment the construction of an Engine object based on entrypoint names in a URL.
Engine | Connects a Pool and Dialect together to provide a source of database connectivity and behavior.
ExceptionContext | Encapsulate information about an error condition in progress.
NestedTransaction | Represent a ‘nested’, or SAVEPOINT transaction.
ResultProxy | A facade around a DBAPI cursor object.
RowProxy | Represent a single result row.
Transaction | Represent a database transaction in progress.
TwoPhaseTransaction | Represent a two-phase transaction.
- class sqlalchemy.engine.Connection(engine, connection=None, close_with_result=False, _branch_from=None, _execution_options=None, _dispatch=None, _has_events=None)¶
Provides high-level functionality for a wrapped DB-API connection.
Provides execution support for string-based SQL statements as well as ClauseElement, Compiled and DefaultGenerator objects. Provides a begin() method to return Transaction objects.

The Connection object is not thread-safe. While a Connection can be shared among threads using properly synchronized access, it is still possible that the underlying DBAPI connection may not support shared access between threads. Check the DBAPI documentation for details.
Members
__init__(), begin(), begin_nested(), begin_twophase(), close(), closed, connect(), connection, default_isolation_level, detach(), execute(), execution_options(), get_execution_options(), get_isolation_level(), in_transaction(), info, invalidate(), invalidated, run_callable(), scalar(), schema_for_object, transaction()
The Connection object represents a single DBAPI connection checked out from the connection pool. In this state, the connection pool has no effect upon the connection, including its expiration or timeout state. For the connection pool to properly manage connections, connections should be returned to the connection pool (i.e. connection.close()) whenever the connection is not in use.

Class signature: class sqlalchemy.engine.Connection (sqlalchemy.engine.Connectable)

-
method
sqlalchemy.engine.Connection.
__init__(engine, connection=None, close_with_result=False, _branch_from=None, _execution_options=None, _dispatch=None, _has_events=None)¶ Construct a new Connection.
The constructor here is not public and is only called by an Engine. See the Engine.connect() and Engine.contextual_connect() methods.
-
method
sqlalchemy.engine.Connection.
begin()¶ Begin a transaction and return a transaction handle.
The returned object is an instance of Transaction. This object represents the “scope” of the transaction, which completes when either the Transaction.rollback() or Transaction.commit() method is called.

Nested calls to begin() on the same Connection will return new Transaction objects that represent an emulated transaction within the scope of the enclosing transaction, that is:

trans = conn.begin()   # outermost transaction
trans2 = conn.begin()  # "nested"
trans2.commit()        # does nothing
trans.commit()         # actually commits

Calls to Transaction.commit() only have an effect when invoked via the outermost Transaction object, though the Transaction.rollback() method of any of the Transaction objects will roll back the transaction.

See also

Connection.begin_nested() - use a SAVEPOINT

Connection.begin_twophase() - use a two phase / XID transaction

Engine.begin() - context manager available from Engine
-
method
sqlalchemy.engine.Connection.
begin_nested()¶ Begin a nested transaction and return a transaction handle.
The returned object is an instance of NestedTransaction.

Nested transactions require SAVEPOINT support in the underlying database. Any transaction in the hierarchy may commit and rollback, however the outermost transaction still controls the overall commit or rollback of the transaction as a whole.
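For example, a brief sketch of using a SAVEPOINT to recover from a failed statement inside a larger transaction (mytable is a placeholder Table object, as in earlier examples):

with engine.connect() as connection:
    trans = connection.begin()             # outermost transaction
    savepoint = connection.begin_nested()  # SAVEPOINT
    try:
        connection.execute(mytable.insert(), {"col1": "bat", "col2": "lala"})
        savepoint.commit()                 # RELEASE SAVEPOINT
    except:
        savepoint.rollback()               # ROLLBACK TO SAVEPOINT; outer transaction continues
    trans.commit()                         # commit the outermost transaction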
-
method
sqlalchemy.engine.Connection.
begin_twophase(xid=None)¶ Begin a two-phase or XA transaction and return a transaction handle.
The returned object is an instance of TwoPhaseTransaction, which in addition to the methods provided by Transaction, also provides a TwoPhaseTransaction.prepare() method.

- Parameters:
xid – the two phase transaction id. If not supplied, a random id will be generated.
-
method
sqlalchemy.engine.Connection.
close()¶ Close this Connection.

This results in a release of the underlying database resources, that is, the DBAPI connection referenced internally. The DBAPI connection is typically restored back to the connection-holding Pool referenced by the Engine that produced this Connection. Any transactional state present on the DBAPI connection is also unconditionally released via the DBAPI connection’s rollback() method, regardless of any Transaction object that may be outstanding with regards to this Connection.

After Connection.close() is called, the Connection is permanently in a closed state, and will allow no further operations.
-
attribute
sqlalchemy.engine.Connection.
closed¶ Return True if this connection is closed.
-
method
sqlalchemy.engine.Connection.
connect()¶ Returns a branched version of this
Connection
.The
Connection.close()
method on the returnedConnection
can be called and thisConnection
will remain open.This method provides usage symmetry with
Engine.connect()
, including for usage with context managers.
-
attribute
sqlalchemy.engine.Connection.
connection¶ The underlying DB-API connection managed by this Connection.
See also
-
attribute
sqlalchemy.engine.Connection.
default_isolation_level¶ The default isolation level assigned to this
Connection
.This is the isolation level setting that the
Connection
has when first procured via theEngine.connect()
method. This level stays in place until theConnection.execution_options.isolation_level
is used to change the setting on a per-Connection
basis.Unlike
Connection.get_isolation_level()
, this attribute is set ahead of time from the first connection procured by the dialect, so SQL query is not invoked when this accessor is called.New in version 0.9.9.
See also
Connection.get_isolation_level()
- view current levelcreate_engine.isolation_level
- set perEngine
isolation levelConnection.execution_options.isolation_level
- set perConnection
isolation level
-
method
sqlalchemy.engine.Connection.
detach()¶ Detach the underlying DB-API connection from its connection pool.
E.g.:
with engine.connect() as conn:
    conn.detach()
    conn.execute("SET search_path TO schema1, schema2")

    # work with connection

# connection is fully closed (since we used "with:", can
# also call .close())
This
Connection
instance will remain usable. When closed (or exited from a context manager context as above), the DB-API connection will be literally closed and not returned to its originating pool.This method can be used to insulate the rest of an application from a modified state on a connection (such as a transaction isolation level or similar).
-
method
sqlalchemy.engine.Connection.
execute(object_, *multiparams, **params)¶ Executes a SQL statement construct and returns a ResultProxy.

- Parameters:

object –

The statement to be executed. May be one of:

- a plain string
- any ClauseElement construct that is also a subclass of Executable, such as a select() construct
- a FunctionElement, such as that generated by func, which will be automatically wrapped in a SELECT statement, which is then executed
- a DDLElement object
- a DefaultGenerator object
- a Compiled object

*multiparams/**params –

represent bound parameter values to be used in the execution. Typically, the format is either a collection of one or more dictionaries passed to *multiparams:

conn.execute(
    table.insert(),
    {"id":1, "value":"v1"},
    {"id":2, "value":"v2"}
)

…or individual key/values interpreted by **params:

conn.execute(
    table.insert(), id=1, value="v1"
)

In the case that a plain SQL string is passed, and the underlying DBAPI accepts positional bind parameters, a collection of tuples or individual values in *multiparams may be passed:

conn.execute(
    "INSERT INTO table (id, value) VALUES (?, ?)",
    (1, "v1"), (2, "v2")
)

conn.execute(
    "INSERT INTO table (id, value) VALUES (?, ?)",
    1, "v1"
)
Note above, the usage of a question mark “?” or other symbol is contingent upon the “paramstyle” accepted by the DBAPI in use, which may be any of “qmark”, “named”, “pyformat”, “format”, “numeric”. See pep-249 for details on paramstyle.
To execute a textual SQL statement which uses bound parameters in a DBAPI-agnostic way, use the
text()
construct.
-
method
sqlalchemy.engine.Connection.
execution_options(**opt)¶ Set non-SQL options for the connection which take effect during execution.
The method returns a copy of this
Connection
which references the same underlying DBAPI connection, but also defines the given execution options which will take effect for a call toexecute()
. As the newConnection
references the same underlying resource, it’s usually a good idea to ensure that the copies will be discarded immediately, which is implicit if used as in:result = connection.execution_options(stream_results=True).\ execute(stmt)
Note that any key/value can be passed to
Connection.execution_options()
, and it will be stored in the_execution_options
dictionary of theConnection
. It is suitable for usage by end-user schemes to communicate with event listeners, for example.The keywords that are currently recognized by SQLAlchemy itself include all those listed under
Executable.execution_options()
, as well as others that are specific toConnection
.- Parameters:
autocommit –
Available on: Connection, statement. When True, a COMMIT will be invoked after execution when executed in ‘autocommit’ mode, i.e. when an explicit transaction is not begun on the connection. Note that this is library level, not DBAPI level autocommit. The DBAPI connection will remain in a real transaction unless the “AUTOCOMMIT” isolation level is used.
Deprecated since version 1.4: The library-level “autocommit” feature is being removed in favor of database driver “autocommit” which is now widely available. See the section Setting Transaction Isolation Levels including DBAPI Autocommit.
compiled_cache –
Available on: Connection. A dictionary where
Compiled
objects will be cached when theConnection
compiles a clause expression into aCompiled
object. It is the user’s responsibility to manage the size of this dictionary, which will have keys corresponding to the dialect, clause element, the column names within the VALUES or SET clause of an INSERT or UPDATE, as well as the “batch” mode for an INSERT or UPDATE statement. The format of this dictionary is not guaranteed to stay the same in future releases.Note that the ORM makes use of its own “compiled” caches for some operations, including flush operations. The caching used by the ORM internally supersedes a cache dictionary specified here.
isolation_level –
Available on:
Connection
.Set the transaction isolation level for the lifespan of this
Connection
object. Valid values include those string values accepted by thecreate_engine.isolation_level
parameter passed tocreate_engine()
. These levels are semi-database specific; see individual dialect documentation for valid levels.The isolation level option applies the isolation level by emitting statements on the DBAPI connection, and necessarily affects the original Connection object overall, not just the copy that is returned by the call to
Connection.execution_options()
method. The isolation level will remain at the given setting until the DBAPI connection itself is returned to the connection pool, i.e. theConnection.close()
method on the originalConnection
is called, where an event handler will emit additional statements on the DBAPI connection in order to revert the isolation level change.Warning
The
isolation_level
execution option should not be used when a transaction is already established, that is, theConnection.begin()
method or similar has been called. A database cannot change the isolation level on a transaction in progress, and different DBAPIs and/or SQLAlchemy dialects may implicitly roll back or commit the transaction, or not affect the connection at all.Note
The
isolation_level
execution option is implicitly reset if theConnection
is invalidated, e.g. via theConnection.invalidate()
method, or if a disconnection error occurs. The new connection produced after the invalidation will not have the isolation level re-applied to it automatically.See also
create_engine.isolation_level
- set perEngine
isolation levelConnection.get_isolation_level()
- view current levelPostgreSQL Transaction Isolation
SQL Server Transaction Isolation
Setting Transaction Isolation Levels / DBAPI AUTOCOMMIT - for the ORM
no_parameters – When True, if the final parameter list or dictionary is totally empty, will invoke the statement on the cursor as cursor.execute(statement), not passing the parameter collection at all. Some DBAPIs such as psycopg2 and mysql-python consider percent signs as significant only when parameters are present; this option allows code to generate SQL containing percent signs (and possibly other characters) that is neutral regarding whether it’s executed by the DBAPI or piped into a script that’s later invoked by command line tools.

stream_results – Available on: Connection, statement. Indicate to the dialect that results should be “streamed” and not pre-buffered, if possible. This is a limitation of many DBAPIs. The flag is currently understood only by the psycopg2, mysqldb and pymysql dialects.
schema_translate_map –
Available on: Connection, Engine. A dictionary mapping schema names to schema names, that will be applied to the
Table.schema
element of eachTable
encountered when SQL or DDL expression elements are compiled into strings; the resulting schema name will be converted based on presence in the map of the original name.New in version 1.1.
See also
-
method
sqlalchemy.engine.Connection.
get_execution_options()¶ Get the non-SQL options which will take effect during execution.
New in version 1.3.
See also
-
method
sqlalchemy.engine.Connection.
get_isolation_level()¶ Return the current isolation level assigned to this
Connection
.This will typically be the default isolation level as determined by the dialect, unless if the
Connection.execution_options.isolation_level
feature has been used to alter the isolation level on a per-Connection
basis.This attribute will typically perform a live SQL operation in order to procure the current isolation level, so the value returned is the actual level on the underlying DBAPI connection regardless of how this state was set. Compare to the
Connection.default_isolation_level
accessor which returns the dialect-level setting without performing a SQL query.New in version 0.9.9.
See also
Connection.default_isolation_level
- view default levelcreate_engine.isolation_level
- set perEngine
isolation levelConnection.execution_options.isolation_level
- set perConnection
isolation level
-
method
sqlalchemy.engine.Connection.
in_transaction()¶ Return True if a transaction is in progress.
-
attribute
sqlalchemy.engine.Connection.
info¶ Info dictionary associated with the underlying DBAPI connection referred to by this
Connection
, allowing user-defined data to be associated with the connection.The data here will follow along with the DBAPI connection including after it is returned to the connection pool and used again in subsequent instances of
Connection
.
-
method
sqlalchemy.engine.Connection.
invalidate(exception=None)¶ Invalidate the underlying DBAPI connection associated with this
Connection
.The underlying DBAPI connection is literally closed (if possible), and is discarded. Its source connection pool will typically lazily create a new connection to replace it.
Upon the next use (where “use” typically means using the
Connection.execute()
method or similar), thisConnection
will attempt to procure a new DBAPI connection using the services of thePool
as a source of connectivity (e.g. a “reconnection”).If a transaction was in progress (e.g. the
Connection.begin()
method has been called) whenConnection.invalidate()
method is called, at the DBAPI level all state associated with this transaction is lost, as the DBAPI connection is closed. TheConnection
will not allow a reconnection to proceed until theTransaction
object is ended, by calling theTransaction.rollback()
method; until that point, any attempt at continuing to use theConnection
will raise anInvalidRequestError
. This is to prevent applications from accidentally continuing an ongoing transactional operations despite the fact that the transaction has been lost due to an invalidation.The
Connection.invalidate()
method, just like auto-invalidation, will at the connection pool level invoke thePoolEvents.invalidate()
event.See also
-
attribute
sqlalchemy.engine.Connection.
invalidated¶ Return True if this connection was invalidated.
-
method
sqlalchemy.engine.Connection.
run_callable(callable_, *args, **kwargs)¶ Given a callable object or function, execute it, passing a
Connection
as the first argument.The given *args and **kwargs are passed subsequent to the
Connection
argument.This function, along with
Engine.run_callable()
, allows a function to be run with aConnection
orEngine
object without the need to know which one is being dealt with.
-
method
sqlalchemy.engine.Connection.
scalar(object_, *multiparams, **params)¶ Executes and returns the first column of the first row.
The underlying result/cursor is closed after execution.
-
attribute
sqlalchemy.engine.Connection.
schema_for_object = <sqlalchemy.sql.schema._SchemaTranslateMap object>¶ Return the “.schema” attribute for an object.
Used for
Table
,Sequence
and similar objects, and takes into account theConnection.execution_options.schema_translate_map
parameter.New in version 1.1.
See also
-
method
sqlalchemy.engine.Connection.
transaction(callable_, *args, **kwargs)¶ Execute the given function within a transaction boundary.
The function is passed this
Connection
as the first argument, followed by the given *args and **kwargs, e.g.:def do_something(conn, x, y): conn.execute("some statement", {'x':x, 'y':y}) conn.transaction(do_something, 5, 10)
The operations inside the function are all invoked within the context of a single
Transaction
. Upon success, the transaction is committed. If an exception is raised, the transaction is rolled back before propagating the exception.Note
The
transaction()
method is superseded by the usage of the Pythonwith:
statement, which can be used withConnection.begin()
:with conn.begin(): conn.execute("some statement", {'x':5, 'y':10})
As well as with
Engine.begin()
:with engine.begin() as conn: conn.execute("some statement", {'x':5, 'y':10})
See also
Engine.begin()
- engine-level transactional contextEngine.transaction()
- engine-level version ofConnection.transaction()
-
method
- class sqlalchemy.engine.Connectable¶
Interface for an object which supports execution of SQL constructs.
The two implementations of
Connectable
areConnection
andEngine
.Connectable must also implement the ‘dialect’ member which references a
Dialect
instance.-
method
sqlalchemy.engine.Connectable.
connect(**kwargs)¶ Return a
Connection
object.Depending on context, this may be
self
if this object is already an instance ofConnection
, or a newly procuredConnection
if this object is an instance ofEngine
.
-
method
sqlalchemy.engine.Connectable.
contextual_connect(*arg, **kw)¶ Return a
Connection
object which may be part of an ongoing context.Deprecated since version 1.3: The
Engine.contextual_connect()
andConnection.contextual_connect()
methods are deprecated. This method is an artifact of the threadlocal engine strategy which is also to be deprecated. For explicit connections from anEngine
, use theEngine.connect()
method.Depending on context, this may be
self
if this object is already an instance ofConnection
, or a newly procuredConnection
if this object is an instance ofEngine
.
-
method
sqlalchemy.engine.Connectable.
create(entity, **kwargs)¶ Emit CREATE statements for the given schema entity.
Deprecated since version 0.7: The
Connectable.create()
method is deprecated and will be removed in a future release. Please use the.create()
method on specific schema objects to emit DDL sequences, includingTable.create()
,Index.create()
, andMetaData.create_all()
.
-
method
sqlalchemy.engine.Connectable.
drop(entity, **kwargs)¶ Emit DROP statements for the given schema entity.
Deprecated since version 0.7: The
Connectable.drop()
method is deprecated and will be removed in a future release. Please use the.drop()
method on specific schema objects to emit DDL sequences, includingTable.drop()
,Index.drop()
, andMetaData.drop_all()
.
-
attribute
sqlalchemy.engine.Connectable.
engine = None¶ The
Engine
instance referred to by thisConnectable
.May be
self
if this is already anEngine
.
-
method
sqlalchemy.engine.Connectable.
execute(object_, *multiparams, **params)¶ Executes the given construct and returns a
-
method
sqlalchemy.engine.Connectable.
scalar(object_, *multiparams, **params)¶ Executes and returns the first column of the first row.
The underlying cursor is closed after execution.
-
method
- class sqlalchemy.engine.CreateEnginePlugin(url, kwargs)¶
A set of hooks intended to augment the construction of an
Engine
object based on entrypoint names in a URL.The purpose of
CreateEnginePlugin
is to allow third-party systems to apply engine, pool and dialect level event listeners without the need for the target application to be modified; instead, the plugin names can be added to the database URL. Target applications forCreateEnginePlugin
include:connection and SQL performance tools, e.g. which use events to track number of checkouts and/or time spent with statements
connectivity plugins such as proxies
Plugins are registered using entry points in a similar way as that of dialects:
entry_points={
    'sqlalchemy.plugins': [
        'myplugin = myapp.plugins:MyPlugin'
    ]
}
A plugin that uses the above names would be invoked from a database URL as in:
from sqlalchemy import create_engine

engine = create_engine(
    "mysql+pymysql://scott:tiger@localhost/test?plugin=myplugin")
Alternatively, the plugins argument may be passed as a list to create_engine():

engine = create_engine(
    "mysql+pymysql://scott:tiger@localhost/test",
    plugins=["myplugin"])
New in version 1.2.3: plugin names can also be specified to
create_engine()
as a list.
The plugin argument supports multiple instances, so that a URL may specify multiple plugins; they are loaded in the order stated in the URL:

engine = create_engine(
    "mysql+pymysql://scott:tiger@localhost/"
    "test?plugin=plugin_one&plugin=plugin_two&plugin=plugin_three")
A plugin can receive additional arguments from the URL string as well as from the keyword arguments passed to
create_engine()
. TheURL
object and the keyword dictionary are passed to the constructor so that these arguments can be extracted from the url’sURL.query
collection as well as from the dictionary:

class MyPlugin(CreateEnginePlugin):
    def __init__(self, url, kwargs):
        self.my_argument_one = url.query.pop('my_argument_one')
        self.my_argument_two = url.query.pop('my_argument_two')
        self.my_argument_three = kwargs.pop('my_argument_three', None)
Arguments like those illustrated above would be consumed from the following:
from sqlalchemy import create_engine

engine = create_engine(
    "mysql+pymysql://scott:tiger@localhost/"
    "test?plugin=myplugin&my_argument_one=foo&my_argument_two=bar",
    my_argument_three='bat')
The URL and dictionary are used for subsequent setup of the engine as they are, so the plugin can modify their arguments in-place. Arguments that are only understood by the plugin should be popped or otherwise removed so that they aren’t interpreted as erroneous arguments afterwards.
When the engine creation process completes and produces the
Engine
object, it is again passed to the plugin via theCreateEnginePlugin.engine_created()
hook. In this hook, additional changes can be made to the engine, most typically involving setup of events (e.g. those defined in Core Events).New in version 1.1.
-
method
sqlalchemy.engine.CreateEnginePlugin.
__init__(url, kwargs)¶ Construct a new
CreateEnginePlugin
.The plugin object is instantiated individually for each call to
create_engine()
. A singleEngine
will be passed to theCreateEnginePlugin.engine_created()
method corresponding to this URL.- Parameters:
url – the
URL
object. The plugin should inspect what it needs here as well as remove its custom arguments from theURL.query
collection. The URL can be modified in-place in any other way as well.kwargs – The keyword arguments passed to
create_engine()
. The plugin can read and modify this dictionary in-place, to affect the ultimate arguments used to create the engine. It should remove its custom arguments from the dictionary as well.
-
method
sqlalchemy.engine.CreateEnginePlugin.
engine_created(engine)¶ Receive the
Engine
object when it is fully constructed.The plugin may make additional changes to the engine, such as registering engine or connection pool events.
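As a rough sketch of how this hook might be used (the plugin class and listener below are hypothetical, not part of SQLAlchemy):

from sqlalchemy import event
from sqlalchemy.engine import CreateEnginePlugin

class LoggingPlugin(CreateEnginePlugin):
    def __init__(self, url, kwargs):
        pass

    def engine_created(self, engine):
        # register a cursor-level event once the Engine is fully built
        @event.listens_for(engine, "before_cursor_execute")
        def log_statement(conn, cursor, statement, parameters,
                          context, executemany):
            print("about to run:", statement)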
-
method
sqlalchemy.engine.CreateEnginePlugin.
handle_dialect_kwargs(dialect_cls, dialect_args)¶ Parse and modify dialect kwargs.
-
method
sqlalchemy.engine.CreateEnginePlugin.
handle_pool_kwargs(pool_cls, pool_args)¶ Parse and modify pool kwargs.
- class sqlalchemy.engine.Engine(pool, dialect, url, logging_name=None, echo=None, proxy=None, execution_options=None, hide_parameters=False)¶
Connects a
Pool
andDialect
together to provide a source of database connectivity and behavior.An
Engine
object is instantiated publicly using thecreate_engine()
function.Members
begin(), connect(), contextual_connect(), dispose(), driver, engine, execute(), execution_options(), get_execution_options(), has_table(), name, raw_connection(), run_callable(), scalar(), schema_for_object, table_names(), transaction(), update_execution_options()
Class signature
class
sqlalchemy.engine.Engine
(sqlalchemy.engine.Connectable
,sqlalchemy.log.Identified
)-
method
sqlalchemy.engine.Engine.
begin(close_with_result=False)¶ Return a context manager delivering a
Connection
with aTransaction
established.E.g.:
with engine.begin() as conn:
    conn.execute("insert into table (x, y, z) values (1, 2, 3)")
    conn.execute("my_special_procedure(5)")
Upon successful operation, the
Transaction
is committed. If an error is raised, theTransaction
is rolled back.The
close_with_result
flag is normallyFalse
, and indicates that theConnection
will be closed when the operation is complete. When set toTrue
, it indicates theConnection
is in “single use” mode, where theResultProxy
returned by the first call toConnection.execute()
will close theConnection
when thatResultProxy
has exhausted all result rows.See also
Engine.connect() - procure a Connection from an Engine.
Connection.begin() - start a Transaction for a particular Connection.
-
method
sqlalchemy.engine.Engine.
connect(**kwargs)¶ Return a new
Connection
object.The
Connection
object is a facade that uses a DBAPI connection internally in order to communicate with the database. This connection is procured from the connection-holdingPool
referenced by thisEngine
. When theConnection.close()
method of theConnection
object is called, the underlying DBAPI connection is then returned to the connection pool, where it may be used again in a subsequent call toEngine.connect()
.
-
method
sqlalchemy.engine.Engine.
contextual_connect(close_with_result=False, **kwargs)¶ Return a
Connection
object which may be part of some ongoing context.Deprecated since version 1.3: The
Engine.contextual_connect()
method is deprecated. This method is an artifact of the threadlocal engine strategy which is also to be deprecated. For explicit connections from anEngine
, use theEngine.connect()
method.By default, this method does the same thing as
Engine.connect()
. Subclasses ofEngine
may override this method to provide contextual behavior.- Parameters:
close_with_result – When True, the first
ResultProxy
created by theConnection
will call theConnection.close()
method of that connection as soon as any pending result rows are exhausted. This is used to supply the “connectionless execution” behavior provided by theEngine.execute()
method.
-
method
sqlalchemy.engine.Engine.
dispose()¶ Dispose of the connection pool used by this
Engine
.This has the effect of fully closing all currently checked in database connections. Connections that are still checked out will not be closed, however they will no longer be associated with this
Engine
, so when they are closed individually, eventually thePool
which they are associated with will be garbage collected and they will be closed out fully, if not already closed on checkin.A new connection pool is created immediately after the old one has been disposed. This new pool, like all SQLAlchemy connection pools, does not make any actual connections to the database until one is first requested, so as long as the
Engine
isn’t used again, no new connections will be made.See also
-
attribute
sqlalchemy.engine.Engine.
driver¶
-
attribute
sqlalchemy.engine.Engine.
engine¶ The
Engine
instance referred to by thisConnectable
.May be
self
if this is already anEngine
.
-
method
sqlalchemy.engine.Engine.
execute(statement, *multiparams, **params)¶ Executes the given construct and returns a
ResultProxy
.The arguments are the same as those used by
Connection.execute()
.Here, a
Connection
is acquired using theEngine.contextual_connect()
method, and the statement executed with that connection. The returnedResultProxy
is flagged such that when theResultProxy
is exhausted and its underlying cursor is closed, theConnection
created here will also be closed, which allows its associated DBAPI connection resource to be returned to the connection pool.
-
method
sqlalchemy.engine.Engine.
execution_options(**opt)¶ Return a new
Engine
that will provideConnection
objects with the given execution options.The returned
Engine
remains related to the originalEngine
in that it shares the same connection pool and other state:The
Pool
used by the newEngine
is the same instance. TheEngine.dispose()
method will replace the connection pool instance for the parent engine as well as this one.Event listeners are “cascaded” - meaning, the new
Engine
inherits the events of the parent, and new events can be associated with the newEngine
individually.The logging configuration and logging_name is copied from the parent
Engine
.
The intent of the
Engine.execution_options()
method is to implement “sharding” schemes where multipleEngine
objects refer to the same connection pool, but are differentiated by options that would be consumed by a custom event:

primary_engine = create_engine("mysql://")

shard1 = primary_engine.execution_options(shard_id="shard1")
shard2 = primary_engine.execution_options(shard_id="shard2")
Above, the
shard1
engine serves as a factory forConnection
objects that will contain the execution optionshard_id=shard1
, andshard2
will produceConnection
objects that contain the execution optionshard_id=shard2
.An event handler can consume the above execution option to perform a schema switch or other operation, given a connection. Below we emit a MySQL
use
statement to switch databases, at the same time keeping track of which database we’ve established using theConnection.info
dictionary, which gives us a persistent storage space that follows the DBAPI connection:

from sqlalchemy import event
from sqlalchemy.engine import Engine

shards = {"default": "base", "shard1": "db1", "shard2": "db2"}

@event.listens_for(Engine, "before_cursor_execute")
def _switch_shard(conn, cursor, stmt, params, context, executemany):
    shard_id = conn._execution_options.get('shard_id', "default")
    current_shard = conn.info.get("current_shard", None)

    if current_shard != shard_id:
        cursor.execute("use %s" % shards[shard_id])
        conn.info["current_shard"] = shard_id
See also
Connection.execution_options() - update execution options on a Connection object.
Engine.update_execution_options() - update the execution options for a given Engine in place.
-
method
sqlalchemy.engine.Engine.
get_execution_options()¶ Get the non-SQL options which will take effect during execution.
See also
-
method
sqlalchemy.engine.Engine.
has_table(table_name, schema=None)¶ Return True if the given backend has a table of the given name.
See also
Fine Grained Reflection with Inspector - detailed schema inspection using the Inspector interface.
quoted_name - used to pass quoting information along with a schema identifier.
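For example, a minimal sketch in which the "users" table name and Table object are hypothetical:

if not engine.has_table("users"):
    users.create(engine)   # emit CREATE TABLE via Table.create()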
-
attribute
sqlalchemy.engine.Engine.
name¶
-
method
sqlalchemy.engine.Engine.
raw_connection(_connection=None)¶ Return a “raw” DBAPI connection from the connection pool.
The returned object is a proxied version of the DBAPI connection object used by the underlying driver in use. The object will have all the same behavior as the real DBAPI connection, except that its
close()
method will result in the connection being returned to the pool, rather than being closed for real.This method provides direct DBAPI connection access for special situations when the API provided by
Connection
is not needed. When aConnection
object is already present, the DBAPI connection is available using theConnection.connection
accessor.See also
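A minimal sketch of working with the raw connection using plain DBAPI calls; the table name is hypothetical:

dbapi_conn = engine.raw_connection()
try:
    cursor = dbapi_conn.cursor()
    cursor.execute("select count(*) from users")
    print(cursor.fetchone()[0])
    cursor.close()
    dbapi_conn.commit()
finally:
    dbapi_conn.close()   # returns the DBAPI connection to the pool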
-
method
sqlalchemy.engine.Engine.
run_callable(callable_, *args, **kwargs)¶ Given a callable object or function, execute it, passing a
Connection
as the first argument.The given *args and **kwargs are passed subsequent to the
Connection
argument.This function, along with
Connection.run_callable()
, allows a function to be run with aConnection
orEngine
object without the need to know which one is being dealt with.
-
method
sqlalchemy.engine.Engine.
scalar(statement, *multiparams, **params)¶ Executes and returns the first column of the first row.
The underlying cursor is closed after execution.
-
attribute
sqlalchemy.engine.Engine.
schema_for_object = <sqlalchemy.sql.schema._SchemaTranslateMap object>¶ Return the “.schema” attribute for an object.
Used for Table, Sequence and similar objects, and takes into account the Connection.execution_options.schema_translate_map parameter.
New in version 1.1.
See also
-
method
sqlalchemy.engine.Engine.
table_names(schema=None, connection=None)¶ Return a list of all table names available in the database.
- Parameters:
schema – Optional, retrieve names from a non-default schema.
connection – Optional, use a specified connection. Default is the
contextual_connect
for thisEngine
.
-
method
sqlalchemy.engine.Engine.
transaction(callable_, *args, **kwargs)¶ Execute the given function within a transaction boundary.
The function is passed a
Connection
newly procured fromEngine.contextual_connect()
as the first argument, followed by the given *args and **kwargs.e.g.:
def do_something(conn, x, y):
    conn.execute("some statement", {'x':x, 'y':y})

engine.transaction(do_something, 5, 10)
The operations inside the function are all invoked within the context of a single
Transaction
. Upon success, the transaction is committed. If an exception is raised, the transaction is rolled back before propagating the exception.
Note
The transaction() method is superseded by the usage of the Python with: statement, which can be used with Engine.begin():

with engine.begin() as conn:
    conn.execute("some statement", {'x':5, 'y':10})
See also
Engine.begin() - engine-level transactional context
Connection.transaction() - connection-level version of Engine.transaction()
-
method
sqlalchemy.engine.Engine.
update_execution_options(**opt)¶ Update the default execution_options dictionary of this
Engine
.The given keys/values in **opt are added to the default execution options that will be used for all connections. The initial contents of this dictionary can be sent via the
execution_options
parameter tocreate_engine()
.
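A minimal sketch, with a hypothetical URL and illustrative option values:

engine = create_engine(
    "postgresql://scott:tiger@localhost/test",
    execution_options={"stream_results": True})

# later, merge additional defaults into the engine in place
engine.update_execution_options(shard_id="shard1")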
-
method
- class sqlalchemy.engine.ExceptionContext¶
Encapsulate information about an error condition in progress.
This object exists solely to be passed to the
ConnectionEvents.handle_error()
event, supporting an interface that can be extended without backwards-incompatibility.Members
chained_exception, connection, cursor, engine, execution_context, invalidate_pool_on_disconnect, is_disconnect, original_exception, parameters, sqlalchemy_exception, statement
New in version 0.9.7.
-
attribute
sqlalchemy.engine.ExceptionContext.
chained_exception = None¶ The exception that was returned by the previous handler in the exception chain, if any.
If present, this exception will be the one ultimately raised by SQLAlchemy unless a subsequent handler replaces it.
May be None.
-
attribute
sqlalchemy.engine.ExceptionContext.
connection = None¶ The
Connection
in use during the exception.This member is present, except in the case of a failure when first connecting.
See also
-
attribute
sqlalchemy.engine.ExceptionContext.
cursor = None¶ The DBAPI cursor object.
May be None.
-
attribute
sqlalchemy.engine.ExceptionContext.
engine = None¶ The
Engine
in use during the exception.This member should always be present, even in the case of a failure when first connecting.
New in version 1.0.0.
-
attribute
sqlalchemy.engine.ExceptionContext.
execution_context = None¶ The
ExecutionContext
corresponding to the execution operation in progress.This is present for statement execution operations, but not for operations such as transaction begin/end. It also is not present when the exception was raised before the
ExecutionContext
could be constructed.Note that the
ExceptionContext.statement
andExceptionContext.parameters
members may represent a different value than that of theExecutionContext
, potentially in the case where aConnectionEvents.before_cursor_execute()
event or similar modified the statement/parameters to be sent.May be None.
-
attribute
sqlalchemy.engine.ExceptionContext.
invalidate_pool_on_disconnect = True¶ Represent whether all connections in the pool should be invalidated when a “disconnect” condition is in effect.
Setting this flag to False within the scope of the
ConnectionEvents.handle_error()
event will have the effect such that the full collection of connections in the pool will not be invalidated during a disconnect; only the current connection that is the subject of the error will actually be invalidated.The purpose of this flag is for custom disconnect-handling schemes where the invalidation of other connections in the pool is to be performed based on other conditions, or even on a per-connection basis.
New in version 1.0.3.
-
attribute
sqlalchemy.engine.ExceptionContext.
is_disconnect = None¶ Represent whether the exception, as it occurred, represents a “disconnect” condition.
This flag will always be True or False within the scope of the
ConnectionEvents.handle_error()
handler.SQLAlchemy will defer to this flag in order to determine whether or not the connection should be invalidated subsequently. That is, by assigning to this flag, a “disconnect” event which then results in a connection and pool invalidation can be invoked or prevented by changing this flag.
Note
The pool “pre_ping” handler enabled using the
create_engine.pool_pre_ping
parameter does not consult this event before deciding if the “ping” returned false, as opposed to receiving an unhandled error. For this use case, the legacy recipe based on engine_connect() may be used. A future API will allow more comprehensive customization of the “disconnect” detection mechanism across all functions.
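A minimal sketch of a ConnectionEvents.handle_error() listener receiving this object; the error-message test is hypothetical:

from sqlalchemy import event

@event.listens_for(engine, "handle_error")
def receive_handle_error(context):
    if "server has gone away" in str(context.original_exception):
        # treat the failure as a disconnect...
        context.is_disconnect = True
        # ...but invalidate only the current connection, not the whole pool
        context.invalidate_pool_on_disconnect = False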
-
attribute
sqlalchemy.engine.ExceptionContext.
original_exception = None¶ The exception object which was caught.
This member is always present.
-
attribute
sqlalchemy.engine.ExceptionContext.
parameters = None¶ Parameter collection that was emitted directly to the DBAPI.
May be None.
-
attribute
sqlalchemy.engine.ExceptionContext.
sqlalchemy_exception = None¶ The
sqlalchemy.exc.StatementError
which wraps the original, and will be raised if exception handling is not circumvented by the event.May be None, as not all exception types are wrapped by SQLAlchemy. For DBAPI-level exceptions that subclass the dbapi’s Error class, this field will always be present.
-
attribute
sqlalchemy.engine.ExceptionContext.
statement = None¶ String SQL statement that was emitted directly to the DBAPI.
May be None.
-
attribute
- class sqlalchemy.engine.NestedTransaction(connection, parent)¶
Represent a ‘nested’, or SAVEPOINT transaction.
A new
NestedTransaction
object may be procured using theConnection.begin_nested()
method.The interface is the same as that of
Transaction
.Class signature
class
sqlalchemy.engine.NestedTransaction
(sqlalchemy.engine.Transaction
)
- class sqlalchemy.engine.ResultProxy(context)¶
A facade around a DBAPI cursor object.
Returns database rows via the
RowProxy
class, which provides additional API features and behaviors on top of the raw data returned by the DBAPI.Members
close(), fetchall(), fetchmany(), fetchone(), first(), inserted_primary_key, is_insert, keys(), last_inserted_params(), last_updated_params(), lastrow_has_defaults(), lastrowid, next(), postfetch_cols(), prefetch_cols(), returned_defaults, returns_rows, rowcount, scalar(), supports_sane_multi_rowcount(), supports_sane_rowcount()
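As a brief illustration of the fetch methods listed above (the table and column names are hypothetical):

result = connection.execute("select id, name from users")

row = result.fetchone()        # a single RowProxy, or None
batch = result.fetchmany(10)   # a list of up to ten RowProxy objects
for row in result:             # iterate the remaining rows
    print(row["id"], row["name"])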
See also
Selecting - introductory material for accessing
ResultProxy
andRowProxy
objects.-
method
sqlalchemy.engine.ResultProxy.
close()¶ Close this ResultProxy.
This closes out the underlying DBAPI cursor corresponding to the statement execution, if one is still present. Note that the DBAPI cursor is automatically released when the
ResultProxy
exhausts all available rows.ResultProxy.close()
is generally an optional method except in the case when discarding aResultProxy
that still has additional rows pending for fetch.In the case of a result that is the product of connectionless execution, the underlying
Connection
object is also closed, which releases DBAPI connection resources.After this method is called, it is no longer valid to call upon the fetch methods, which will raise a
ResourceClosedError
on subsequent use.Changed in version 1.0.0: - the
ResultProxy.close()
method has been separated out from the process that releases the underlying DBAPI cursor resource. The “auto close” feature of theConnection
now performs a so-called “soft close”, which releases the underlying DBAPI cursor, but allows theResultProxy
to still behave as an open-but-exhausted result set; the actualResultProxy.close()
method is never called. It is still safe to discard aResultProxy
that has been fully exhausted without calling this method.See also
-
method
sqlalchemy.engine.ResultProxy.
fetchall()¶ Fetch all rows, just like DB-API
cursor.fetchall()
.After all rows have been exhausted, the underlying DBAPI cursor resource is released, and the object may be safely discarded.
Subsequent calls to
ResultProxy.fetchall()
will return an empty list. After theResultProxy.close()
method is called, the method will raiseResourceClosedError
.- Returns:
a list of
RowProxy
objects
-
method
sqlalchemy.engine.ResultProxy.
fetchmany(size=None)¶ Fetch many rows, just like DB-API
cursor.fetchmany(size=cursor.arraysize)
.After all rows have been exhausted, the underlying DBAPI cursor resource is released, and the object may be safely discarded.
Calls to
ResultProxy.fetchmany()
after all rows have been exhausted will return an empty list. After theResultProxy.close()
method is called, the method will raiseResourceClosedError
.- Returns:
a list of
RowProxy
objects
-
method
sqlalchemy.engine.ResultProxy.
fetchone()¶ Fetch one row, just like DB-API
cursor.fetchone()
.After all rows have been exhausted, the underlying DBAPI cursor resource is released, and the object may be safely discarded.
Calls to
ResultProxy.fetchone()
after all rows have been exhausted will returnNone
. After theResultProxy.close()
method is called, the method will raiseResourceClosedError
.- Returns:
a
RowProxy
object, or None if no rows remain
-
method
sqlalchemy.engine.ResultProxy.
first()¶ Fetch the first row and then close the result set unconditionally.
After calling this method, the object is fully closed, e.g. the
ResultProxy.close()
method will have been called.- Returns:
a
RowProxy
object, or None if no rows remain
-
attribute
sqlalchemy.engine.ResultProxy.
inserted_primary_key¶ Return the primary key for the row just inserted.
The return value is a list of scalar values corresponding to the list of primary key columns in the target table.
This only applies to single row
insert()
constructs which did not explicitly specifyInsert.returning()
.Note that primary key columns which specify a server_default clause, or otherwise do not qualify as “autoincrement” columns (see the notes at
Column
), and were generated using the database-side default, will appear in this list asNone
unless the backend supports “returning” and the insert statement executed with the “implicit returning” enabled.Raises
InvalidRequestError
if the executed statement is not a compiled expression construct or is not an insert() construct.
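A minimal sketch, where the users Table is hypothetical:

result = connection.execute(users.insert().values(name="wendy"))
print(result.inserted_primary_key)   # e.g. [1]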
-
attribute
sqlalchemy.engine.ResultProxy.
is_insert¶ True if this
ResultProxy
is the result of executing an expression language compiled insert()
construct.When True, this implies that the
inserted_primary_key
attribute is accessible, assuming the statement did not include a user defined “returning” construct.
-
method
sqlalchemy.engine.ResultProxy.
keys()¶ Return the list of string keys that would be represented by each
RowProxy
.
-
method
sqlalchemy.engine.ResultProxy.
last_inserted_params()¶ Return the collection of inserted parameters from this execution.
Raises
InvalidRequestError
if the executed statement is not a compiled expression construct or is not an insert() construct.
-
method
sqlalchemy.engine.ResultProxy.
last_updated_params()¶ Return the collection of updated parameters from this execution.
Raises
InvalidRequestError
if the executed statement is not a compiled expression construct or is not an update() construct.
-
method
sqlalchemy.engine.ResultProxy.
lastrow_has_defaults()¶ Return
lastrow_has_defaults()
from the underlyingExecutionContext
.See
ExecutionContext
for details.
-
attribute
sqlalchemy.engine.ResultProxy.
lastrowid¶ Return the ‘lastrowid’ accessor on the DBAPI cursor.
This is a DBAPI-specific method and is only functional for those backends which support it, for statements where it is appropriate. Its behavior is not consistent across backends.
Usage of this method is normally unnecessary when using insert() expression constructs; the
ResultProxy.inserted_primary_key
attribute provides a tuple of primary key values for a newly inserted row, regardless of database backend.
-
method
sqlalchemy.engine.ResultProxy.
next()¶ Implement the Python next() protocol.
This method, mirrored as both
.next()
and.__next__()
, is part of Python’s API for producing iterator-like behavior.New in version 1.2.
-
method
sqlalchemy.engine.ResultProxy.
postfetch_cols()¶ Return
postfetch_cols()
from the underlyingExecutionContext
.See
ExecutionContext
for details.Raises
InvalidRequestError
if the executed statement is not a compiled expression construct or is not an insert() or update() construct.
-
method
sqlalchemy.engine.ResultProxy.
prefetch_cols()¶ Return
prefetch_cols()
from the underlyingExecutionContext
.See
ExecutionContext
for details.Raises
InvalidRequestError
if the executed statement is not a compiled expression construct or is not an insert() or update() construct.
-
attribute
sqlalchemy.engine.ResultProxy.
returned_defaults¶ Return the values of default columns that were fetched using the
ValuesBase.return_defaults()
feature.The value is an instance of
RowProxy
, orNone
ifValuesBase.return_defaults()
was not used or if the backend does not support RETURNING.New in version 0.9.0.
See also
-
attribute
sqlalchemy.engine.ResultProxy.
returns_rows¶ True if this
ResultProxy
returns rows.I.e. if it is legal to call the methods
ResultProxy.fetchone()
, ResultProxy.fetchmany(), or ResultProxy.fetchall().
-
attribute
sqlalchemy.engine.ResultProxy.
rowcount¶ Return the ‘rowcount’ for this result.
The ‘rowcount’ reports the number of rows matched by the WHERE criterion of an UPDATE or DELETE statement.
Note
Notes regarding
ResultProxy.rowcount
:This attribute returns the number of rows matched, which is not necessarily the same as the number of rows that were actually modified - an UPDATE statement, for example, may have no net change on a given row if the SET values given are the same as those present in the row already. Such a row would be matched but not modified. On backends that feature both styles, such as MySQL, rowcount is configured by default to return the match count in all cases.
ResultProxy.rowcount
is only useful in conjunction with an UPDATE or DELETE statement. Contrary to what the Python DBAPI says, it does not return the number of rows available from the results of a SELECT statement as DBAPIs cannot support this functionality when rows are unbuffered.ResultProxy.rowcount
may not be fully implemented by all dialects. In particular, most DBAPIs do not support an aggregate rowcount result from an executemany call. TheResultProxy.supports_sane_rowcount()
andResultProxy.supports_sane_multi_rowcount()
methods will report from the dialect if each usage is known to be supported.Statements that use RETURNING may not return a correct rowcount.
-
method
sqlalchemy.engine.ResultProxy.
scalar()¶ Fetch the first column of the first row, and close the result set.
After calling this method, the object is fully closed, e.g. the
ResultProxy.close()
method will have been called.- Returns:
a Python scalar value, or None if no rows remain
-
method
sqlalchemy.engine.ResultProxy.
supports_sane_multi_rowcount()¶ Return
supports_sane_multi_rowcount
from the dialect.See
ResultProxy.rowcount
for background.
-
method
sqlalchemy.engine.ResultProxy.
supports_sane_rowcount()¶ Return
supports_sane_rowcount
from the dialect.See
ResultProxy.rowcount
for background.
-
method
- class sqlalchemy.engine.RowProxy(parent, row, processors, keymap)¶
Represent a single result row.
The
RowProxy
object is retrieved from a database result, from theResultProxy
object using methods likeResultProxy.fetchall()
.The
RowProxy
object seeks to act mostly like a Python named tuple, but also provides some Python dictionary behaviors at the same time.See also
Selecting - includes examples of selecting rows from SELECT statements.
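A minimal sketch of the access styles described above; the column names are hypothetical:

row = result.fetchone()

row[0]             # by position, like a tuple
row["username"]    # by string key, like a mapping
row.username       # by attribute name
row.keys()         # list of column names for this row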
Members
has_key(), items(), iterkeys(), itervalues(), keys(), values()
Class signature
class
sqlalchemy.engine.RowProxy
(sqlalchemy.engine.BaseRowProxy
)-
method
sqlalchemy.engine.RowProxy.
has_key(key)¶ Return True if this
RowProxy
contains the given key.Through the SQLAlchemy 1.x series, the
__contains__()
method ofRowProxy
also links toRowProxy.has_key()
, in that an expression such as"some_col" in row
Will return True if the row contains a column named
"some_col"
, in the way that a Python mapping works.However, it is planned that the 2.0 series of SQLAlchemy will reverse this behavior so that
__contains__()
will refer to a value being present in the row, in the way that a Python tuple works.
-
method
sqlalchemy.engine.RowProxy.
items()¶ Return a list of tuples, each tuple containing a key/value pair.
This method is analogous to the Python dictionary
.items()
method, except that it returns a list, not an iterator.
-
method
sqlalchemy.engine.RowProxy.
iterkeys()¶ Return an iterator against the
RowProxy.keys()
method.This method is analogous to the Python-2-only dictionary
.iterkeys()
method.
-
method
sqlalchemy.engine.RowProxy.
itervalues()¶ Return an iterator against the
RowProxy.values()
method.This method is analogous to the Python-2-only dictionary
.itervalues()
method.
-
method
sqlalchemy.engine.RowProxy.
keys()¶ Return the list of keys as strings represented by this
RowProxy
.This method is analogous to the Python dictionary
.keys()
method, except that it returns a list, not an iterator.
-
method
sqlalchemy.engine.RowProxy.
values()¶ Return the values represented by this
RowProxy
as a list.This method is analogous to the Python dictionary
.values()
method, except that it returns a list, not an iterator.
-
method
- class sqlalchemy.engine.Transaction(connection, parent)¶
Represent a database transaction in progress.
The
Transaction
object is procured by calling theConnection.begin()
method of Connection:

from sqlalchemy import create_engine

engine = create_engine("postgresql://scott:tiger@localhost/test")
connection = engine.connect()
trans = connection.begin()
connection.execute("insert into x (a, b) values (1, 2)")
trans.commit()
The object provides
rollback()
andcommit()
methods in order to control transaction boundaries. It also implements a context manager interface so that the Pythonwith
statement can be used with theConnection.begin()
method:

with connection.begin():
    connection.execute("insert into x (a, b) values (1, 2)")
The Transaction object is not threadsafe.
Members
-
method
sqlalchemy.engine.Transaction.
close()¶ Close this
Transaction
.If this transaction is the base transaction in a begin/commit nesting, the transaction will rollback(). Otherwise, the method returns.
This is used to cancel a Transaction without affecting the scope of an enclosing transaction.
-
method
sqlalchemy.engine.Transaction.
commit()¶ Commit this
Transaction
.
-
method
sqlalchemy.engine.Transaction.
rollback()¶ Roll back this
Transaction
.
-
method
- class sqlalchemy.engine.TwoPhaseTransaction(connection, xid)¶
Represent a two-phase transaction.
A new
TwoPhaseTransaction
object may be procured using theConnection.begin_twophase()
method.The interface is the same as that of
Transaction
with the addition of theprepare()
method.Members
Class signature
class
sqlalchemy.engine.TwoPhaseTransaction
(sqlalchemy.engine.Transaction
)-
method
sqlalchemy.engine.TwoPhaseTransaction.
prepare()¶ Prepare this
TwoPhaseTransaction
.After a PREPARE, the transaction can be committed.
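A minimal sketch, assuming a backend and DBAPI with two-phase support and a hypothetical table:

connection = engine.connect()
xa = connection.begin_twophase()
connection.execute("insert into x (a, b) values (1, 2)")
xa.prepare()   # phase one
xa.commit()    # phase two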
-
method