Operation Reference¶
This file provides documentation on Alembic migration directives.

The directives here are used within user-defined migration files, within the upgrade() and downgrade() functions, as well as any functions further invoked by those.

All directives exist as methods on a class called Operations. When migration scripts are run, this object is made available to the script via the alembic.op datamember, which is a proxy to an actual instance of Operations. Currently, alembic.op is a real Python module, populated with individual proxies for each method on Operations, so symbols can be imported safely from the alembic.op namespace.

The Operations system is also fully extensible. See Operation Plugins for details on this.
A key design philosophy of the operation directive methods is that, to the greatest degree possible, they internally generate the appropriate SQLAlchemy metadata, typically involving Table and Constraint objects. This is so that migration instructions can be given in terms of just the string names and/or flags involved. The exceptions to this rule include the add_column() and create_table() directives, which require full Column objects, though the table metadata is still generated here.
The functions here all require that a MigrationContext has first been configured within the env.py script, typically via EnvironmentContext.configure(). Under normal circumstances they are called from an actual migration script, which itself would be invoked by the EnvironmentContext.run_migrations() method.
- class alembic.operations.AbstractOperations(migration_context: MigrationContext, impl: BatchOperationsImpl | None = None)¶
Base class for Operations and BatchOperations.
New in version 1.11.0.
Construct a new Operations object.
- Parameters:
migration_context – a MigrationContext instance.
- batch_alter_table(table_name: str, schema: str | None = None, recreate: Literal['auto', 'always', 'never'] = 'auto', partial_reordering: tuple | None = None, copy_from: Table | None = None, table_args: Tuple[Any, ...] = (), table_kwargs: Mapping[str, Any] = {}, reflect_args: Tuple[Any, ...] = (), reflect_kwargs: Mapping[str, Any] = {}, naming_convention: Dict[str, str] | None = None) Iterator[BatchOperations] ¶
Invoke a series of per-table migrations in batch.
Batch mode allows a series of operations specific to a table to be syntactically grouped together, and allows for alternate modes of table migration, in particular the “recreate” style of migration required by SQLite.
The “recreate” style proceeds as follows:
- A new table is created with the new specification, based on the migration directives within the batch, using a temporary name.
- The data is copied from the existing table to the new table.
- The existing table is dropped.
- The new table is renamed to the existing table name.
The directive by default will only use “recreate” style on the SQLite backend, and only if directives are present which require this form, e.g. anything other than add_column(). The batch operation on other backends will proceed using standard ALTER TABLE operations.

The method is used as a context manager, which returns an instance of BatchOperations; this object is the same as Operations except that table names and schema names are omitted. E.g.:

with op.batch_alter_table("some_table") as batch_op:
    batch_op.add_column(Column("foo", Integer))
    batch_op.drop_column("bar")

The operations within the context manager are invoked at once when the context is ended. When run against SQLite, if the migrations include operations not supported by SQLite’s ALTER TABLE, the entire table will be copied to a new one with the new specification, moving all data across as well.
The copy operation by default uses reflection to retrieve the current structure of the table, and therefore batch_alter_table() in this mode requires that the migration is run in “online” mode. The copy_from parameter may be passed which refers to an existing Table object, which will bypass this reflection step.

Note
The table copy operation will currently not copy CHECK constraints, and may not copy UNIQUE constraints that are unnamed, as is possible on SQLite. See the section Dealing with Constraints for workarounds.
- Parameters:
table_name – name of table.
schema – optional schema name.
recreate – under what circumstances the table should be recreated. At its default of "auto", the SQLite dialect will recreate the table if any operations other than add_column(), create_index(), or drop_index() are present. Other options include "always" and "never".
copy_from – optional Table object that will act as the structure of the table being copied. If omitted, table reflection is used to retrieve the structure of the table.
reflect_args – a sequence of additional positional arguments that will be applied to the table structure being reflected / copied; this may be used to pass column and constraint overrides to the table that will be reflected, in lieu of passing the whole Table using copy_from.
reflect_kwargs – a dictionary of additional keyword arguments that will be applied to the table structure being copied; this may be used to pass additional table and reflection options to the table that will be reflected, in lieu of passing the whole Table using copy_from.
table_args – a sequence of additional positional arguments that will be applied to the new Table when created, in addition to those copied from the source table. This may be used to provide additional constraints such as CHECK constraints that may not be reflected.
table_kwargs – a dictionary of additional keyword arguments that will be applied to the new Table when created, in addition to those copied from the source table. This may be used to provide for additional table options that may not be reflected.
naming_convention – a naming convention dictionary of the form described at Integration of Naming Conventions into Operations, Autogenerate which will be applied to the MetaData during the reflection process. This is typically required if one wants to drop SQLite constraints, as these constraints will not have names when reflected on this backend. Requires SQLAlchemy 0.9.4 or greater.
partial_reordering – a list of tuples, each suggesting a desired ordering of two or more columns in the newly created table. Requires that batch_alter_table.recreate is set to "always". Examples, given a table with columns “a”, “b”, “c”, and “d”:

Specify the order of all columns:
with op.batch_alter_table(
    "some_table",
    recreate="always",
    partial_reordering=[("c", "d", "a", "b")],
) as batch_op:
    pass
Ensure “d” appears before “c”, and “b” appears before “a”:
with op.batch_alter_table(
    "some_table",
    recreate="always",
    partial_reordering=[("d", "c"), ("b", "a")],
) as batch_op:
    pass
The ordering of columns not included in the partial_reordering set is undefined; therefore it is best to specify the complete ordering of all columns.
New in version 1.4.0.
Note
batch mode requires SQLAlchemy 0.8 or above.
- f(name: str) conv ¶
Indicate a string name that has already had a naming convention applied to it.
This feature combines with the SQLAlchemy naming_convention feature to disambiguate constraint names that have already had naming conventions applied to them, versus those that have not. This is necessary in the case that the "%(constraint_name)s" token is used within a naming convention, so that it can be identified that this particular name should remain fixed.

If Operations.f() is used on a constraint, the naming convention will not take effect:

op.add_column("t", "x", Boolean(name=op.f("ck_bool_t_x")))

Above, the CHECK constraint generated will have the name ck_bool_t_x regardless of whether or not a naming convention is in use.

Alternatively, if a naming convention is in use, and f() is not used, names will be converted along conventions. If the target_metadata contains the naming convention {"ck": "ck_bool_%(table_name)s_%(constraint_name)s"}, then the output of the following:

op.add_column("t", "x", Boolean(name="x"))

will be:

CONSTRAINT ck_bool_t_x CHECK (x in (1, 0))

The function is rendered in the output of autogenerate when a particular constraint name is already converted.
- get_bind() Connection ¶
Return the current ‘bind’.
Under normal circumstances, this is the Connection currently being used to emit SQL to the database.

In a SQL script context, this value is None.
- get_context() MigrationContext ¶
Return the MigrationContext object that’s currently in use.
- classmethod implementation_for(op_cls: Any) Callable[[...], Any] ¶
Register an implementation for a given MigrateOperation.

This is part of the operation extensibility API.

See also
Operation Plugins - example of use
- inline_literal(value: str | int, type_: TypeEngine[Any] | None = None) _literal_bindparam ¶
Produce an ‘inline literal’ expression, suitable for using in an INSERT, UPDATE, or DELETE statement.
When using Alembic in “offline” mode, CRUD operations aren’t compatible with SQLAlchemy’s default behavior surrounding literal values, which is that they are converted into bound values and passed separately into the execute() method of the DBAPI cursor. An offline SQL script needs to have these rendered inline. While it should always be noted that inline literal values are an enormous security hole in an application that handles untrusted input, a schema migration is not run in this context, so literals are safe to render inline, with the caveat that advanced types like dates may not be supported directly by SQLAlchemy.

See Operations.execute() for an example usage of Operations.inline_literal().

The environment can also be configured to attempt to render “literal” values inline automatically, for those simple types that are supported by the dialect; see EnvironmentContext.configure.literal_binds for this more recently added feature.

- Parameters:
value – The value to render. Strings, integers, and simple numerics should be supported. Other types like boolean, dates, etc. may or may not be supported yet by various backends.
type_ – optional - a sqlalchemy.types.TypeEngine subclass stating the type of this value. In SQLAlchemy expressions, this is usually derived automatically from the Python type of the value itself, as well as based on the context in which the value is used.
- invoke(operation: MigrateOperation) Any ¶
Given a MigrateOperation, invoke it in terms of this Operations instance.
- classmethod register_operation(name: str, sourcename: str | None = None) Callable[[...], Any] ¶
Register a new operation for this class.
This method is normally used to add new operations to the Operations class, and possibly the BatchOperations class as well. All Alembic migration operations are implemented via this system, however the system is also available as a public API to facilitate adding custom operations.

See also
Operation Plugins
- run_async(async_function: Callable[[...], Awaitable[_T]], *args: Any, **kw_args: Any) _T ¶
Invoke the given asynchronous callable, passing an AsyncConnection as the first argument.

This method allows calling async functions from within the synchronous upgrade() or downgrade() Alembic migration method.

The async connection passed to the callable shares the same transaction as the connection running in the migration context.

Any additional args or kwargs passed to this function are passed to the provided async function.

Note
This method can be called only when Alembic is run using an async dialect.
- class alembic.operations.Operations(migration_context: MigrationContext, impl: BatchOperationsImpl | None = None)¶
Define high level migration operations.
Each operation corresponds to some schema migration operation, executed against a particular MigrationContext which in turn represents connectivity to a database, or a file output stream.

While Operations is normally configured as part of the EnvironmentContext.run_migrations() method called from an env.py script, a standalone Operations instance can be made for use cases external to regular Alembic migrations by passing in a MigrationContext:

from alembic.migration import MigrationContext
from alembic.operations import Operations

conn = myengine.connect()
ctx = MigrationContext.configure(conn)
op = Operations(ctx)

op.alter_column("t", "c", nullable=True)

Note that as of 0.8, most of the methods on this class are produced dynamically using the Operations.register_operation() method.

Construct a new Operations object.
- Parameters:
migration_context – a MigrationContext instance.
- add_column(table_name: str, column: Column, *, schema: str | None = None) None ¶
Issue an “add column” instruction using the current migration context.
e.g.:

from alembic import op
from sqlalchemy import Column, String

op.add_column("organization", Column("name", String()))
The Operations.add_column() method typically corresponds to the SQL command “ALTER TABLE… ADD COLUMN”. Within the scope of this command, the column’s name, datatype, nullability, and optional server-generated defaults may be indicated.

Note
With the exception of NOT NULL constraints or single-column FOREIGN KEY constraints, other kinds of constraints such as PRIMARY KEY, UNIQUE or CHECK constraints cannot be generated using this method; for these constraints, refer to operations such as Operations.create_primary_key() and Operations.create_check_constraint(). In particular, the following Column parameters are ignored:
- primary_key - SQL databases typically do not support an ALTER operation that can add individual columns one at a time to an existing primary key constraint, therefore it’s less ambiguous to use the Operations.create_primary_key() method, which assumes no existing primary key constraint is present.
- unique - use the Operations.create_unique_constraint() method.
- index - use the Operations.create_index() method.

The provided Column object may include a ForeignKey constraint directive, referencing a remote table name. For this specific type of constraint, Alembic will automatically emit a second ALTER statement in order to add the single-column FOREIGN KEY constraint separately:

from alembic import op
from sqlalchemy import Column, INTEGER, ForeignKey

op.add_column(
    "organization",
    Column("account_id", INTEGER, ForeignKey("accounts.id")),
)

The column argument passed to Operations.add_column() is a Column construct, used in the same way it’s used in SQLAlchemy. In particular, values or functions to be indicated as producing the column’s default value on the database side are specified using the server_default parameter, and not default which only specifies Python-side defaults:

from alembic import op
from sqlalchemy import Column, TIMESTAMP, func

# specify "DEFAULT NOW" along with the column add
op.add_column(
    "account",
    Column("timestamp", TIMESTAMP, server_default=func.now()),
)
- Parameters:
table_name – String name of the parent table.
column – a sqlalchemy.schema.Column object representing the new column.
schema – Optional schema name to operate within. To control quoting of the schema outside of the default behavior, use the SQLAlchemy construct quoted_name.
- alter_column(table_name: str, column_name: str, *, nullable: bool | None = None, comment: str | Literal[False] | None = False, server_default: Any = False, new_column_name: str | None = None, type_: TypeEngine | Type[TypeEngine] | None = None, existing_type: TypeEngine | Type[TypeEngine] | None = None, existing_server_default: str | bool | Identity | Computed | None = False, existing_nullable: bool | None = None, existing_comment: str | None = None, schema: str | None = None, **kw: Any) None ¶
Issue an “alter column” instruction using the current migration context.
Generally, only that aspect of the column which is being changed, i.e. name, type, nullability, default, needs to be specified. Multiple changes can also be specified at once and the backend should “do the right thing”, emitting each change either separately or together as the backend allows.
MySQL has special requirements here, since MySQL cannot ALTER a column without a full specification. When producing MySQL-compatible migration files, it is recommended that the existing_type, existing_server_default, and existing_nullable parameters be present, if not being altered.

Type changes which are against the SQLAlchemy “schema” types Boolean and Enum may also add or drop constraints which accompany those types on backends that don’t support them natively. The existing_type argument is used in this case to identify and remove a previous constraint that was bound to the type object.

- Parameters:
table_name – string name of the target table.
column_name – string name of the target column, as it exists before the operation begins.
nullable – Optional; specify True or False to alter the column’s nullability.
server_default – Optional; specify a string SQL expression, text(), or DefaultClause to indicate an alteration to the column’s default value. Set to None to have the default removed.
comment – optional string text of a new comment to add to the column.
New in version 1.0.6.
new_column_name – Optional; specify a string name here to indicate the new name within a column rename operation.
type_ – Optional; a TypeEngine type object to specify a change to the column’s type. For SQLAlchemy types that also indicate a constraint (i.e. Boolean, Enum), the constraint is also generated.
autoincrement – set the AUTO_INCREMENT flag of the column; currently understood by the MySQL dialect.
existing_type – Optional; a TypeEngine type object to specify the previous type. This is required for all MySQL column alter operations that don’t otherwise specify a new type, as well as for when nullability is being changed on a SQL Server column. It is also used if the type is a so-called SQLAlchemy “schema” type which may define a constraint (i.e. Boolean, Enum), so that the constraint can be dropped.
existing_server_default – Optional; The existing default value of the column. Required on MySQL if an existing default is not being changed; else MySQL removes the default.
existing_nullable – Optional; the existing nullability of the column. Required on MySQL if the existing nullability is not being changed; else MySQL sets this to NULL.
existing_autoincrement – Optional; the existing autoincrement of the column. Used for MySQL’s system of altering a column that specifies AUTO_INCREMENT.
existing_comment – string text of the existing comment on the column to be maintained. Required on MySQL if the existing comment on the column is not being changed.
New in version 1.0.6.
schema – Optional schema name to operate within. To control quoting of the schema outside of the default behavior, use the SQLAlchemy construct quoted_name.
postgresql_using – String argument which will indicate a SQL expression to render within the Postgresql-specific USING clause within ALTER COLUMN. This string is taken directly as raw SQL which must explicitly include any necessary quoting or escaping of tokens within the expression.
- bulk_insert(table: Table | TableClause, rows: List[dict], *, multiinsert: bool = True) None ¶
Issue a “bulk insert” operation using the current migration context.
This provides a means of representing an INSERT of multiple rows which works equally well in the context of executing on a live connection as well as that of generating a SQL script. In the case of a SQL script, the values are rendered inline into the statement.
e.g.:
from alembic import op
from datetime import date
from sqlalchemy.sql import table, column
from sqlalchemy import String, Integer, Date

# Create an ad-hoc table to use for the insert statement.
accounts_table = table(
    "account",
    column("id", Integer),
    column("name", String),
    column("create_date", Date),
)

op.bulk_insert(
    accounts_table,
    [
        {
            "id": 1,
            "name": "John Smith",
            "create_date": date(2010, 10, 5),
        },
        {
            "id": 2,
            "name": "Ed Williams",
            "create_date": date(2007, 5, 27),
        },
        {
            "id": 3,
            "name": "Wendy Jones",
            "create_date": date(2008, 8, 15),
        },
    ],
)
When using --sql mode, some datatypes may not render inline automatically, such as dates and other special types. When this issue is present, Operations.inline_literal() may be used:

op.bulk_insert(
    accounts_table,
    [
        {
            "id": 1,
            "name": "John Smith",
            "create_date": op.inline_literal("2010-10-05"),
        },
        {
            "id": 2,
            "name": "Ed Williams",
            "create_date": op.inline_literal("2007-05-27"),
        },
        {
            "id": 3,
            "name": "Wendy Jones",
            "create_date": op.inline_literal("2008-08-15"),
        },
    ],
    multiinsert=False,
)
When using Operations.inline_literal() in conjunction with Operations.bulk_insert(), in order for the statement to work in “online” (e.g. non --sql) mode, the multiinsert flag should be set to False, which will have the effect of individual INSERT statements being emitted to the database, each with a distinct VALUES clause, so that the “inline” values can still be rendered, rather than attempting to pass the values as bound parameters.

- Parameters:
table – a table object which represents the target of the INSERT.
rows – a list of dictionaries indicating rows.
multiinsert – when at its default of True and --sql mode is not enabled, the INSERT statement will be executed using “executemany()” style, where all elements in the list of dictionaries are passed as bound parameters in a single list. Setting this to False results in individual INSERT statements being emitted per parameter set, and is needed in those cases where non-literal values are present in the parameter sets.
- create_check_constraint(constraint_name: str | None, table_name: str, condition: str | BinaryExpression | TextClause, *, schema: str | None = None, **kw: Any) None ¶
Issue a “create check constraint” instruction using the current migration context.
e.g.:
from alembic import op
from sqlalchemy.sql import column, func

op.create_check_constraint(
    "ck_user_name_len",
    "user",
    func.len(column("name")) > 5,
)
CHECK constraints are usually against a SQL expression, so ad-hoc table metadata is usually needed. The function will convert the given arguments into a sqlalchemy.schema.CheckConstraint bound to an anonymous table in order to emit the CREATE statement.

- Parameters:
constraint_name – Name of the check constraint. The name is necessary so that an ALTER statement can be emitted. For setups that use an automated naming scheme such as that described at Configuring Constraint Naming Conventions, the name here can be None, as the event listener will apply the name to the constraint object when it is associated with the table.
table_name – String name of the source table.
condition – SQL expression that’s the condition of the constraint. Can be a string or SQLAlchemy expression language structure.
deferrable – optional bool. If set, emit DEFERRABLE or NOT DEFERRABLE when issuing DDL for this constraint.
initially – optional string. If set, emit INITIALLY <value> when issuing DDL for this constraint.
schema – Optional schema name to operate within. To control quoting of the schema outside of the default behavior, use the SQLAlchemy construct quoted_name.
- create_exclude_constraint(constraint_name: str, table_name: str, *elements: Any, **kw: Any) Table | None ¶
Issue an alter to create an EXCLUDE constraint using the current migration context.
Note
This method is Postgresql specific, and additionally requires at least SQLAlchemy 1.0.
e.g.:

from alembic import op

op.create_exclude_constraint(
    "user_excl",
    "user",
    ("period", "&&"),
    ("group", "="),
    where=("group != 'some group'"),
)

Note that the expressions work the same way as that of the ExcludeConstraint object itself; if plain strings are passed, quoting rules must be applied manually.
constraint_name – Name of the constraint.
table_name – String name of the source table.
elements – exclude conditions.
where – SQL expression or SQL string with optional WHERE clause.
deferrable – optional bool. If set, emit DEFERRABLE or NOT DEFERRABLE when issuing DDL for this constraint.
initially – optional string. If set, emit INITIALLY <value> when issuing DDL for this constraint.
schema – Optional schema name to operate within.
- create_foreign_key(constraint_name: str | None, source_table: str, referent_table: str, local_cols: List[str], remote_cols: List[str], *, onupdate: str | None = None, ondelete: str | None = None, deferrable: bool | None = None, initially: str | None = None, match: str | None = None, source_schema: str | None = None, referent_schema: str | None = None, **dialect_kw: Any) None ¶
Issue a “create foreign key” instruction using the current migration context.
e.g.:

from alembic import op

op.create_foreign_key(
    "fk_user_address",
    "address",
    "user",
    ["user_id"],
    ["id"],
)
This internally generates a Table object containing the necessary columns, then generates a new ForeignKeyConstraint object which it then associates with the Table. Any event listeners associated with this action will be fired off normally. The AddConstraint construct is ultimately used to generate the ALTER statement.

- Parameters:
constraint_name – Name of the foreign key constraint. The name is necessary so that an ALTER statement can be emitted. For setups that use an automated naming scheme such as that described at Configuring Constraint Naming Conventions, the name here can be None, as the event listener will apply the name to the constraint object when it is associated with the table.
source_table – String name of the source table.
referent_table – String name of the destination table.
local_cols – a list of string column names in the source table.
remote_cols – a list of string column names in the remote table.
onupdate – Optional string. If set, emit ON UPDATE <value> when issuing DDL for this constraint. Typical values include CASCADE, DELETE and RESTRICT.
ondelete – Optional string. If set, emit ON DELETE <value> when issuing DDL for this constraint. Typical values include CASCADE, DELETE and RESTRICT.
deferrable – optional bool. If set, emit DEFERRABLE or NOT DEFERRABLE when issuing DDL for this constraint.
source_schema – Optional schema name of the source table.
referent_schema – Optional schema name of the destination table.
- create_index(index_name: str | None, table_name: str, columns: Sequence[str | TextClause | Function[Any]], *, schema: str | None = None, unique: bool = False, **kw: Any) None ¶
Issue a “create index” instruction using the current migration context.
e.g.:
from alembic import op op.create_index("ik_test", "t1", ["foo", "bar"])
Functional indexes can be produced by using the sqlalchemy.sql.expression.text() construct:

from alembic import op
from sqlalchemy import text

op.create_index("ik_test", "t1", [text("lower(foo)")])
- Parameters:
index_name – name of the index.
table_name – name of the owning table.
columns – a list consisting of string column names and/or text() constructs.
schema – Optional schema name to operate within. To control quoting of the schema outside of the default behavior, use the SQLAlchemy construct quoted_name.
unique – If True, create a unique index.
quote – Force quoting of this column’s name on or off, corresponding to True or False. When left at its default of None, the column identifier will be quoted according to whether the name is case sensitive (identifiers with at least one upper case character are treated as case sensitive), or if it’s a reserved word. This flag is only needed to force quoting of a reserved word which is not known by the SQLAlchemy dialect.
**kw – Additional keyword arguments not mentioned above are dialect specific, and passed in the form <dialectname>_<argname>. See the documentation regarding an individual dialect at Dialects for detail on documented arguments.
- create_primary_key(constraint_name: str | None, table_name: str, columns: List[str], *, schema: str | None = None) None ¶
Issue a “create primary key” instruction using the current migration context.
e.g.:
from alembic import op op.create_primary_key("pk_my_table", "my_table", ["id", "version"])
This internally generates a Table object containing the necessary columns, then generates a new PrimaryKeyConstraint object which it then associates with the Table. Any event listeners associated with this action will be fired off normally. The AddConstraint construct is ultimately used to generate the ALTER statement.

- Parameters:
constraint_name – Name of the primary key constraint. The name is necessary so that an ALTER statement can be emitted. For setups that use an automated naming scheme such as that described at Configuring Constraint Naming Conventions, the name here can be None, as the event listener will apply the name to the constraint object when it is associated with the table.
table_name – String name of the target table.
columns – a list of string column names to be applied to the primary key constraint.
schema – Optional schema name to operate within. To control quoting of the schema outside of the default behavior, use the SQLAlchemy construct quoted_name.
- create_table(table_name: str, *columns: SchemaItem, **kw: Any) Table ¶
Issue a “create table” instruction using the current migration context.
This directive receives an argument list similar to that of the traditional sqlalchemy.schema.Table construct, but without the metadata:

from sqlalchemy import INTEGER, VARCHAR, NVARCHAR, TIMESTAMP, Column, func
from alembic import op

op.create_table(
    "account",
    Column("id", INTEGER, primary_key=True),
    Column("name", VARCHAR(50), nullable=False),
    Column("description", NVARCHAR(200)),
    Column("timestamp", TIMESTAMP, server_default=func.now()),
)
Note that create_table() accepts Column constructs directly from the SQLAlchemy library. In particular, default values to be created on the database side are specified using the server_default parameter, and not default which only specifies Python-side defaults:

from alembic import op
from sqlalchemy import Column, INTEGER, TIMESTAMP, func

# specify "DEFAULT NOW" along with the "timestamp" column
op.create_table(
    "account",
    Column("id", INTEGER, primary_key=True),
    Column("timestamp", TIMESTAMP, server_default=func.now()),
)
The function also returns a newly created Table object, corresponding to the table specification given, which is suitable for immediate SQL operations, in particular Operations.bulk_insert():

from sqlalchemy import INTEGER, VARCHAR, NVARCHAR, TIMESTAMP, Column, func
from alembic import op

account_table = op.create_table(
    "account",
    Column("id", INTEGER, primary_key=True),
    Column("name", VARCHAR(50), nullable=False),
    Column("description", NVARCHAR(200)),
    Column("timestamp", TIMESTAMP, server_default=func.now()),
)

op.bulk_insert(
    account_table,
    [
        {"name": "A1", "description": "account 1"},
        {"name": "A2", "description": "account 2"},
    ],
)
- Parameters:
table_name – Name of the table.
*columns – collection of Column objects within the table, as well as optional Constraint objects and Index objects.
schema – Optional schema name to operate within. To control quoting of the schema outside of the default behavior, use the SQLAlchemy construct quoted_name.
**kw – Other keyword arguments are passed to the underlying sqlalchemy.schema.Table object created for the command.
- Returns:
the Table object corresponding to the parameters given.
- create_table_comment(table_name: str, comment: str | None, *, existing_comment: str | None = None, schema: str | None = None) None ¶
Emit a COMMENT ON operation to set the comment for a table.
New in version 1.0.6.
- Parameters:
table_name – string name of the target table.
comment – string value of the comment being registered against the specified table.
existing_comment – String value of a comment already registered on the specified table, used within autogenerate so that the operation is reversible, but not required for direct use.
- create_unique_constraint(constraint_name: str | None, table_name: str, columns: Sequence[str], *, schema: str | None = None, **kw: Any) Any ¶
Issue a “create unique constraint” instruction using the current migration context.
e.g.:

from alembic import op

op.create_unique_constraint("uq_user_name", "user", ["name"])
This internally generates a Table object containing the necessary columns, then generates a new UniqueConstraint object which it then associates with the Table. Any event listeners associated with this action will be fired off normally. The AddConstraint construct is ultimately used to generate the ALTER statement.

- Parameters:
constraint_name – Name of the unique constraint. The name is necessary so that an ALTER statement can be emitted. For setups that use an automated naming scheme such as that described at Configuring Constraint Naming Conventions, the name here can be None, as the event listener will apply the name to the constraint object when it is associated with the table.
table_name – String name of the source table.
columns – a list of string column names in the source table.
deferrable – optional bool. If set, emit DEFERRABLE or NOT DEFERRABLE when issuing DDL for this constraint.
initially – optional string. If set, emit INITIALLY <value> when issuing DDL for this constraint.
schema – Optional schema name to operate within. To control quoting of the schema outside of the default behavior, use the SQLAlchemy construct quoted_name.
- drop_column(table_name: str, column_name: str, *, schema: str | None = None, **kw: Any) None ¶
Issue a “drop column” instruction using the current migration context.
e.g.:
    op.drop_column("organization", "account_id")
- Parameters:
  table_name – name of table
  column_name – name of column
  schema – Optional schema name to operate within. To control quoting of the schema outside of the default behavior, use the SQLAlchemy construct quoted_name.
  mssql_drop_check – Optional boolean. When True, on Microsoft SQL Server only, first drop the CHECK constraint on the column using a SQL-script-compatible block that selects into a @variable from sys.check_constraints, then exec's a separate DROP CONSTRAINT for that constraint.
  mssql_drop_default – Optional boolean. When True, on Microsoft SQL Server only, first drop the DEFAULT constraint on the column using a SQL-script-compatible block that selects into a @variable from sys.default_constraints, then exec's a separate DROP CONSTRAINT for that default.
  mssql_drop_foreign_key – Optional boolean. When True, on Microsoft SQL Server only, first drop a single FOREIGN KEY constraint on the column using a SQL-script-compatible block that selects into a @variable from sys.foreign_keys/sys.foreign_key_columns, then exec's a separate DROP CONSTRAINT for that foreign key. Only works if the column has exactly one FK constraint which refers to it, at the moment.
- drop_constraint(constraint_name: str, table_name: str, *, type_: str | None = None, schema: str | None = None) None ¶
Drop a constraint of the given name, typically via DROP CONSTRAINT.
- Parameters:
  constraint_name – name of the constraint.
  table_name – table name.
  type_ – optional, required on MySQL. Can be 'foreignkey', 'primary', 'unique', or 'check'.
  schema – Optional schema name to operate within. To control quoting of the schema outside of the default behavior, use the SQLAlchemy construct quoted_name.
- drop_index(index_name: str, *, table_name: str | None = None, schema: str | None = None, **kw: Any) None ¶
Issue a “drop index” instruction using the current migration context.
e.g.:
    op.drop_index("accounts")
- Parameters:
  index_name – name of the index.
  table_name – name of the owning table. Some backends such as Microsoft SQL Server require this.
  schema – Optional schema name to operate within. To control quoting of the schema outside of the default behavior, use the SQLAlchemy construct quoted_name.
  **kw – Additional keyword arguments not mentioned above are dialect specific, and passed in the form <dialectname>_<argname>. See the documentation regarding an individual dialect at Dialects for detail on documented arguments.
- drop_table(table_name: str, *, schema: str | None = None, **kw: Any) None ¶
Issue a “drop table” instruction using the current migration context.
e.g.:
    op.drop_table("accounts")
- Parameters:
  table_name – Name of the table
  schema – Optional schema name to operate within. To control quoting of the schema outside of the default behavior, use the SQLAlchemy construct quoted_name.
  **kw – Other keyword arguments are passed to the underlying sqlalchemy.schema.Table object created for the command.
- drop_table_comment(table_name: str, *, existing_comment: str | None = None, schema: str | None = None) None ¶
Issue a “drop table comment” operation to remove an existing comment set on a table.
New in version 1.0.6.
- Parameters:
table_name – string name of the target table.
existing_comment – An optional string value of a comment already registered on the specified table.
- execute(sqltext: str | TextClause | Update, *, execution_options: dict[str, Any] | None = None) None ¶
Execute the given SQL using the current migration context.
The given SQL can be a plain string, e.g.:
op.execute("INSERT INTO table (foo) VALUES ('some value')")
Or it can be any kind of Core SQL Expression construct, such as below where we use an update construct:
    from sqlalchemy.sql import table, column
    from sqlalchemy import String
    from alembic import op

    account = table("account", column("name", String))
    op.execute(
        account.update()
        .where(account.c.name == op.inline_literal("account 1"))
        .values({"name": op.inline_literal("account 2")})
    )
Above, we made use of the SQLAlchemy sqlalchemy.sql.expression.table() and sqlalchemy.sql.expression.column() constructs to make a brief, ad-hoc table construct just for our UPDATE statement. A full Table construct of course works perfectly fine as well, though note it's a recommended practice to at least ensure the definition of a table is self-contained within the migration script, rather than imported from a module that may break compatibility with older migrations.

In a SQL script context, the statement is emitted directly to the output stream. There is no return result, however, as this function is oriented towards generating a change script that can run in "offline" mode. Additionally, parameterized statements are discouraged here, as they will not work in offline mode. Above, we use inline_literal() where parameters are to be used.

For full interaction with a connected database where parameters can also be used normally, use the "bind" available from the context:

    from alembic import op

    connection = op.get_bind()
    connection.execute(
        account.update()
        .where(account.c.name == "account 1")
        .values({"name": "account 2"})
    )
Additionally, when passing the statement as a plain string, it is first coerced into a sqlalchemy.sql.expression.text() construct before being passed along. In the less likely case that the literal SQL string contains a colon, it must be escaped with a backslash, as:

    op.execute(r"INSERT INTO table (foo) VALUES ('\:colon_value')")
- Parameters:
  sqltext – Any legal SQLAlchemy expression, including:
    a string
    a sqlalchemy.sql.expression.text() construct
    a sqlalchemy.sql.expression.insert() construct
    a sqlalchemy.sql.expression.update() or sqlalchemy.sql.expression.delete() construct
    any “executable” described in SQLAlchemy Core documentation, noting that no result set is returned

    Note: when passing a plain string, the statement is coerced into a sqlalchemy.sql.expression.text() construct. This construct considers symbols with colons, e.g. :foo, to be bound parameters. To avoid this, ensure that colon symbols are escaped, e.g. \:foo.

  execution_options – Optional dictionary of execution options, will be passed to sqlalchemy.engine.Connection.execution_options().
- rename_table(old_table_name: str, new_table_name: str, *, schema: str | None = None) None ¶
Emit an ALTER TABLE to rename a table.
- Parameters:
  old_table_name – old name.
  new_table_name – new name.
  schema – Optional schema name to operate within. To control quoting of the schema outside of the default behavior, use the SQLAlchemy construct quoted_name.
- class alembic.operations.BatchOperations(migration_context: MigrationContext, impl: BatchOperationsImpl | None = None)¶
Modifies the interface Operations for batch mode.

This basically omits the table_name and schema parameters from associated methods, as these are a given when running under batch mode.

See also
Operations.batch_alter_table()

Note that as of 0.8, most of the methods on this class are produced dynamically using the Operations.register_operation() method.

Construct a new Operations

- Parameters:
  migration_context – a MigrationContext instance.
- add_column(column: Column, *, insert_before: str | None = None, insert_after: str | None = None) None ¶
Issue an “add column” instruction using the current batch migration context.
See also
- alter_column(column_name: str, *, nullable: bool | None = None, comment: str | Literal[False] | None = False, server_default: Any = False, new_column_name: str | None = None, type_: TypeEngine | Type[TypeEngine] | None = None, existing_type: TypeEngine | Type[TypeEngine] | None = None, existing_server_default: str | bool | Identity | Computed | None = False, existing_nullable: bool | None = None, existing_comment: str | None = None, insert_before: str | None = None, insert_after: str | None = None, **kw: Any) None ¶
Issue an “alter column” instruction using the current batch migration context.
Parameters are the same as that of Operations.alter_column(), as well as the following option(s):

- Parameters:
  insert_before – String name of an existing column which this column should be placed before, when creating the new table.
    New in version 1.4.0.
  insert_after – String name of an existing column which this column should be placed after, when creating the new table. If both BatchOperations.alter_column.insert_before and BatchOperations.alter_column.insert_after are omitted, the column is inserted after the last existing column in the table.
    New in version 1.4.0.
See also
- create_check_constraint(constraint_name: str, condition: str | BinaryExpression | TextClause, **kw: Any) None ¶
Issue a “create check constraint” instruction using the current batch migration context.
The batch form of this call omits the source and schema arguments from the call.

See also
- create_exclude_constraint(constraint_name: str, *elements: Any, **kw: Any)¶
Issue a “create exclude constraint” instruction using the current batch migration context.
Note
This method is PostgreSQL specific, and additionally requires at least SQLAlchemy 1.0.
- create_foreign_key(constraint_name: str, referent_table: str, local_cols: List[str], remote_cols: List[str], *, referent_schema: str | None = None, onupdate: str | None = None, ondelete: str | None = None, deferrable: bool | None = None, initially: str | None = None, match: str | None = None, **dialect_kw: Any) None ¶
Issue a “create foreign key” instruction using the current batch migration context.
The batch form of this call omits the source and source_schema arguments from the call.

e.g.:

    with op.batch_alter_table("address") as batch_op:
        batch_op.create_foreign_key(
            "fk_user_address",
            "user",
            ["user_id"],
            ["id"],
        )
See also
- create_index(index_name: str, columns: List[str], **kw: Any) None ¶
Issue a “create index” instruction using the current batch migration context.
See also
- create_primary_key(constraint_name: str, columns: List[str]) None ¶
Issue a “create primary key” instruction using the current batch migration context.
The batch form of this call omits the table_name and schema arguments from the call.

See also
- create_table_comment(comment: str | None, *, existing_comment: str | None = None) None ¶
Emit a COMMENT ON operation to set the comment for a table using the current batch migration context.
New in version 1.6.0.
- Parameters:
comment – string value of the comment being registered against the specified table.
existing_comment – String value of a comment already registered on the specified table, used within autogenerate so that the operation is reversible, but not required for direct use.
- create_unique_constraint(constraint_name: str, columns: Sequence[str], **kw: Any) Any ¶
Issue a “create unique constraint” instruction using the current batch migration context.
The batch form of this call omits the source and schema arguments from the call.
- drop_column(column_name: str, **kw: Any) None ¶
Issue a “drop column” instruction using the current batch migration context.
See also
- drop_constraint(constraint_name: str, *, type_: str | None = None) None ¶
Issue a “drop constraint” instruction using the current batch migration context.
The batch form of this call omits the table_name and schema arguments from the call.

See also
- drop_index(index_name: str, **kw: Any) None ¶
Issue a “drop index” instruction using the current batch migration context.
See also
- drop_table_comment(*, existing_comment: str | None = None) None ¶
Issue a “drop table comment” operation to remove an existing comment set on a table using the current batch operations context.
New in version 1.6.0.
- Parameters:
existing_comment – An optional string value of a comment already registered on the specified table.