NOVA-MANAGE(1) | nova | NOVA-MANAGE(1)
nova-manage - Management tool for the OpenStack Compute services.
nova-manage <category> [<action> [<options>...]]
nova-manage controls cloud computing instances by managing various admin-only aspects of Nova.
The standard pattern for executing a nova-manage command is:
nova-manage <category> <command> [<args>]
Run without arguments to see a list of available command categories:
nova-manage
You can also run with a category argument such as db to see a list of all commands in that category:
nova-manage db
These sections describe the available categories and arguments for nova-manage.
These options apply to all commands and may be given in any order, before or after commands. Individual commands may provide additional options. Options without an argument can be combined after a single dash.
Returns exit code 0 if the database schema was synced successfully, or 1 if cell0 cannot be accessed.
Return Codes
Return code | Description |
0 | Nothing was archived. |
1 | Some number of rows were archived. |
2 | Invalid value for --max_rows. |
3 | No connection to the API database could be established using api_database.connection. |
4 | Invalid value for --before. |
255 | An unexpected error occurred. |
If automating, run this command repeatedly while it returns 1, stopping when it returns 0, or use the --until-complete option.
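The retry-until-drained pattern described above can be sketched as a small POSIX shell helper. This is a sketch, not part of nova-manage: run_until_drained is a hypothetical name, and the --max_rows value in the usage comment is illustrative.

```shell
# Sketch: repeat a batched command while it exits 1 (rows were archived),
# stop when it exits 0 (nothing left to archive), and propagate any other
# exit code (2, 3, 4, 255) to the caller for handling.
# run_until_drained is a hypothetical helper, not part of nova-manage.
run_until_drained() {
    rc=1
    while [ "$rc" -eq 1 ]; do
        rc=0
        "$@" || rc=$?
    done
    return "$rc"
}

# Example invocation (illustrative batch size):
# run_until_drained nova-manage db archive_deleted_rows --max_rows 1000
```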
--max-count controls the maximum number of objects to migrate in a given call. If not specified, migration will occur in batches of 50 until fully complete.
Returns exit code 0 if no (further) updates are possible, 1 if the --max-count option was used and some updates were completed successfully (even if others generated errors), 2 if some updates generated errors and no other migrations were able to take effect in the last batch attempted, or 127 if invalid input is provided (e.g. non-numeric max-count).
This command should be called after upgrading database schema and nova services on all controller nodes. If it exits with partial updates (exit status 1) it should be called again, even if some updates initially generated errors, because some updates may depend on others having completed. If it exits with status 2, intervention is required to resolve the issue causing remaining updates to fail. It should be considered successfully completed only when the exit status is 0.
For example:
$ nova-manage db online_data_migrations
Running batches of 50 until complete
2 rows matched query migrate_instances_add_request_spec, 0 migrated
2 rows matched query populate_queued_for_delete, 2 migrated
+---------------------------------------------+--------------+-----------+
| Migration                                   | Total Needed | Completed |
+---------------------------------------------+--------------+-----------+
| create_incomplete_consumers                 | 0            | 0         |
| migrate_instances_add_request_spec          | 2            | 0         |
| migrate_quota_classes_to_api_db             | 0            | 0         |
| migrate_quota_limits_to_api_db              | 0            | 0         |
| migration_migrate_to_uuid                   | 0            | 0         |
| populate_missing_availability_zones         | 0            | 0         |
| populate_queued_for_delete                  | 2            | 2         |
| populate_uuids                              | 0            | 0         |
+---------------------------------------------+--------------+-----------+
In the above example, the migrate_instances_add_request_spec migration found two candidate records but did not need to perform any kind of data migration for either of them. In the case of the populate_queued_for_delete migration, two candidate records were found which did require a data migration. Since --max-count defaults to 50 and only two records were migrated with no more candidates remaining, the command completed successfully with exit code 0.
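The exit-code handling recommended above (re-run on 1, stop on 0, escalate on 2) might be scripted as follows. This is a sketch: migrate_until_done is a hypothetical wrapper name and the messages are illustrative.

```shell
# Sketch: re-run online_data_migrations while it reports partial progress
# (exit code 1), treat 0 as complete, and flag exit code 2 as needing
# manual intervention. Any other code is propagated unchanged.
# migrate_until_done is a hypothetical helper, not part of nova-manage.
migrate_until_done() {
    rc=1
    while [ "$rc" -eq 1 ]; do
        rc=0
        "$@" || rc=$?
    done
    case "$rc" in
        0) echo "all data migrations complete" ;;
        2) echo "remaining migrations failed; manual intervention required" >&2 ;;
    esac
    return "$rc"
}

# Example invocation:
# migrate_until_done nova-manage db online_data_migrations --max-count 50
```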
To migrate a specific host and node, provide the hostname and node uuid with --host $hostname --node $uuid. To migrate all instances on nodes managed by a single host, provide only --host. To iterate over all nodes in the system in a single pass, use --all. Note that this process is not lightweight, so it should not be run frequently without cause, although it is not harmful to do so. If you have multiple cellsv2 cells, you should run this once per cell with the corresponding cell config for each (i.e. this does not iterate cells automatically).
Note that this is not recommended unless you need to run this specific data migration offline, and it should be used with care as the work done is non-trivial. Running smaller and more targeted batches (such as specific nodes) is recommended.
# Purge shadow table rows older than a specific date
nova-manage db purge --before 2015-10-21
# or
nova-manage db purge --before "Oct 21 2015"
# Times are also accepted
nova-manage db purge --before "2015-10-21 12:00"
Note that relative dates (such as yesterday) are not supported natively. The date command can be helpful here:
# Archive deleted rows more than one month old
nova-manage db archive_deleted_rows --before "$(date -d 'now - 1 month')"
In the 18.0.0 Rocky or 19.0.0 Stein release, this command will also upgrade the optional placement database if [placement_database]/connection is configured.
Returns exit code 0 if the database schema was synced successfully. This command should be run before nova-manage db sync.
If --max-count is not specified, all instances in the cell will be mapped in batches of 50. If you have a large number of instances, consider specifying a custom value and run the command until it exits with 0.
Return Codes
Return code | Description |
0 | All instances have been mapped. |
1 | There are still instances to be mapped. |
127 | Invalid value for --max-count. |
255 | An unexpected error occurred. |
This command should be run once after all compute hosts have been deployed and should not be run in parallel. When run in parallel, the commands will collide with each other trying to map the same hosts in the database at the same time.
The meanings of the various exit codes returned by this command are explained below:
NOTE:
The scheduler will not notice that a cell has been enabled/disabled until it is restarted or sent the SIGHUP signal.
NOTE:
Also, if the instance has any attached port with a resource request (e.g. Quality of Service (QoS): Guaranteed Bandwidth, described in the Neutron admin guide under admin/config-qos-min-bw.html) but the corresponding allocation is not found, then the allocation is created against the network device resource providers according to the resource request of that port. It is possible that the missing allocation cannot be created, either because there is not enough resource inventory on the host the instance resides on or because more than one resource provider could fulfill the request. In this case the instance needs to be manually deleted or the port needs to be detached. When nova supports migrating instances with guaranteed bandwidth ports, migration will heal missing allocations for these instances.
Before the allocations for the ports are persisted in placement, nova-manage tries to update each port in neutron to refer to the resource provider UUID which provides the requested resources. If any of the port updates fail in neutron, or the allocation update fails in placement, the command tries to roll back the partial updates to the ports. If the rollback fails, the process stops with exit code 7 and the admin needs to perform the rollback in neutron manually according to the description in the exit code section.
There is also a special case handled for instances that do have allocations created before Placement API microversion 1.8 where project_id and user_id values were required. For those types of allocations, the project_id and user_id are updated using the values from the instance.
Specify --max-count to control the maximum number of instances to process. If not specified, all instances in each cell will be mapped in batches of 50. If you have a large number of instances, consider specifying a custom value and run the command until it exits with 0 or 4.
Specify --verbose to get detailed progress output during execution.
Specify --dry-run to print output but not commit any changes. The return code should be 4. (Since 20.0.0 Train)
Specify --instance to process a specific instance given its UUID. If specified the --max-count option has no effect. (Since 20.0.0 Train)
Specify --skip-port-allocations to skip the healing of the resource allocations of bound ports, e.g. healing bandwidth resource allocation for ports having minimum QoS policy rules attached. If your deployment does not use such a feature then the performance impact of querying neutron ports for each instance can be avoided with this flag. (Since 20.0.0 Train)
Specify --cell to process heal allocations within a specific cell. This is mutually exclusive with the --instance option.
Specify --force to forcefully heal single instance allocation. This option needs to be passed with --instance.
This command requires that the api_database.connection and placement configuration options are set. Placement API >= 1.28 is required.
Return Codes
Return code | Description |
0 | Command completed successfully and allocations were created. |
1 | --max-count was reached and there are more instances to process. |
2 | Unable to find a compute node record for a given instance. |
3 | Unable to create (or update) allocations for an instance against its compute node resource provider. |
4 | Command completed successfully but no allocations were created. |
5 | Unable to query ports from neutron. |
6 | Unable to update ports in neutron. |
7 | Cannot roll back neutron port updates. Manual steps needed. The error message will indicate which neutron ports need to be changed to clean up the binding:profile of the port: $ openstack port unset <port_uuid> --binding-profile allocation |
127 | Invalid input. |
255 | An unexpected error occurred. |
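Given the return codes above, a drain loop for heal_allocations must treat both 0 (allocations created) and 4 (nothing needed creating) as terminal success. A minimal sketch, assuming a hypothetical wrapper name heal_until_done and an illustrative --max-count value:

```shell
# Sketch: re-run heal_allocations while it exits 1 (--max-count reached with
# more instances to process); succeed when it exits 0 or 4, fail otherwise.
# heal_until_done is a hypothetical helper, not part of nova-manage.
heal_until_done() {
    rc=1
    while [ "$rc" -eq 1 ]; do
        rc=0
        "$@" || rc=$?
    done
    # 0: allocations were created; 4: no allocations needed creating.
    [ "$rc" -eq 0 ] || [ "$rc" -eq 4 ]
}

# Example invocation (illustrative batch size):
# heal_until_done nova-manage placement heal_allocations --max-count 100
```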
Specify --verbose to get detailed progress output during execution.
NOTE:
New in version Rocky.
Return Codes
Return code | Description |
0 | Successful run |
1 | A host was found with more than one matching compute node record |
2 | An unexpected error occurred while working with the placement API |
3 | Failed updating provider aggregates in placement |
4 | Host mappings not found for one or more host aggregate members |
5 | Compute node records not found for one or more hosts |
6 | Resource provider not found by uuid for a given host |
255 | An unexpected error occurred. |
You can also ask to delete all the orphaned allocations by specifying --delete.
Specify --verbose to get detailed progress output during execution.
This command requires that the api_database.connection and placement configuration options are set. Placement API >= 1.14 is required.
Return Codes
Return code | Description |
0 | No orphaned allocations were found |
1 | An unexpected error occurred |
3 | Orphaned allocations were found |
4 | All found orphaned allocations were deleted |
127 | Invalid input |
OpenStack Nova documentation: https://docs.openstack.org/nova/latest/
openstack@lists.openstack.org
2010-present, OpenStack Foundation
January 24, 2023