DUPLICITY(1) | User Manuals | DUPLICITY(1) |
duplicity - Encrypted incremental backup to local or remote storage.
For detailed descriptions of each command see chapter ACTIONS.
duplicity [full|incremental] [options] source_directory target_url
duplicity verify [options] [--compare-data] [--file-to-restore <relpath>] [--time time] source_url target_directory
duplicity collection-status [options] [--file-changed <relpath>] target_url
duplicity list-current-files [options] [--time time] target_url
duplicity [restore] [options] [--file-to-restore <relpath>] [--time time] source_url target_directory
duplicity remove-older-than <time> [options] [--force] target_url
duplicity remove-all-but-n-full <count> [options] [--force] target_url
duplicity remove-all-inc-of-but-n-full <count> [options] [--force] target_url
duplicity cleanup [options] [--force] target_url
duplicity replicate [options] [--time time] source_url target_url
Duplicity incrementally backs up files and folders into tar-format volumes encrypted with GnuPG and places them on a remote (or local) storage backend. See chapter URL FORMAT for a list of all supported backends and how to address them. Because duplicity uses librsync, incremental backups are space efficient and only record the parts of files that have changed since the last backup. Currently duplicity supports deleted files, full Unix permissions, uid/gid, directories, symbolic links, fifos, etc., but not hard links.
If you are backing up the root directory /, remember to --exclude /proc, or else duplicity will probably crash on the weird stuff in there.
Here is an example of a backup, using sftp to back up /home/me to some_dir on the other.host machine:
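duplicity /home/me sftp://uid@other.host/some_dir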
If the above is run repeatedly, the first will be a full backup, and subsequent ones will be incremental. To force a full backup, use the full action:
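duplicity full /home/me sftp://uid@other.host/some_dir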
or enforcing a full every other time via --full-if-older-than <time> , e.g. a full every month:
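duplicity --full-if-older-than 1M /home/me sftp://uid@other.host/some_dir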
Now suppose we accidentally delete /home/me and want to restore it the way it was at the time of last backup:
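duplicity sftp://uid@other.host/some_dir /home/me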
Duplicity enters restore mode because the URL comes before the local directory. If we wanted to restore just the file "Mail/article" in /home/me as it was three days ago into /home/me/restored_file:
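duplicity -t 3D --file-to-restore Mail/article sftp://uid@other.host/some_dir /home/me/restored_file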
The following command compares the latest backup with the current files:
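duplicity verify sftp://uid@other.host/some_dir /home/me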
Finally, duplicity recognizes several include/exclude options. For instance, the following will backup the root directory, but exclude /mnt, /tmp, and /proc:
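duplicity --exclude /mnt --exclude /tmp --exclude /proc / file:///usr/local/backup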
Note that in this case the destination is the local directory /usr/local/backup. The following will backup only the /home and /etc directories under root:
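duplicity --include /home --include /etc --exclude '**' / file:///usr/local/backup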
Duplicity can also access a repository via ftp. If a user name is given, the environment variable FTP_PASSWORD is read to determine the password:
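duplicity /home/me ftp://uid@other.host/some_dir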
Duplicity knows action commands, which can be fine-tuned with options.
The actions for backup (full, incr) and restoration (restore) can also be left out, as duplicity detects which mode to use from the order of the target URL and the local folder. If the target URL comes before the local folder, a restore is performed; if the local folder comes before the target URL, that folder is backed up to the target URL.
If a backup is in order and old signatures can be found duplicity
automatically performs an incremental backup.
Note: The following explanations cover some but not all options that can be used in connection with that action command. Consult the OPTIONS section for more detailed information.
When backing up or restoring, this option specifies that the local archive directory is to be created in path. If the archive directory is not specified, the default will be to create the archive directory in ~/.cache/duplicity/.
The archive directory can be shared between backups to multiple targets, because a subdirectory of the archive dir is used for individual backups (see --name ).
The combination of archive directory and backup name must be unique in order to separate the data of different backups.
The interaction between the --archive-dir and the --name options allows for four possible combinations for the location of the archive dir:
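In outline (the paths shown are illustrative):
1. Neither option given: the archive dir is ~/.cache/duplicity/<hash-of-backend-URL>.
2. Only --archive-dir=/arch given: /arch/<hash-of-backend-URL>.
3. Only --name=foo given: ~/.cache/duplicity/foo.
4. Both --archive-dir=/arch and --name=foo given: /arch/foo.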
The same set of prefixes must be passed in on backup and restore.
If both global and type-specific prefixes are set, global prefix will go before type-specific prefixes.
See also A NOTE ON FILENAME PREFIXES
Please note that while ignored errors will be logged, there will be no summary at the end of the operation to tell you what was ignored, if anything. If this is used for emergency restoration of data, it is recommended that you run the backup in such a way that you can revisit the backup log (look for lines containing the string IGNORED_ERROR).
If you ever have to use this option for reasons that are not understood or understood but not your own responsibility, please contact duplicity maintainers. The need to use this option under production circumstances would normally be considered a bug.
# block size scales with the file length...
file_blocksize = int((file_len / (2000 * 512)) * 512)
# ...but never exceeds --max-blocksize (config.max_blocksize, default 2048)
return min(file_blocksize, config.max_blocksize)
where config.max_blocksize defaults to 2048. If you specify a larger max_blocksize, your difftar files will be larger, but your sigtar files will be smaller. If you specify a smaller max_blocksize, the reverse occurs. The --max-blocksize option should be in multiples of 512.
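For example, to raise the cap to 4096-byte blocks (an illustrative invocation, reusing the example paths from EXAMPLES above):
duplicity --max-blocksize 4096 /home/me sftp://uid@other.host/some_dir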
If not specified, the default value is a hash of the backend URL.
duplicity restore --rename Documents/metal Music/metal sftp://uid@other.host/some_dir /home/me
duplicity --rsync-options="--partial-dir=.rsync-partial" /home/me rsync://uid@other.host/some_dir
This option does not apply when using the newer boto3 backend, which does not create buckets.
See also A NOTE ON AMAZON S3 below.
This may be much faster, at some cost to confidentiality.
With this option, anyone who can observe traffic between your computer and S3 will be able to tell: that you are using Duplicity, the name of the bucket, your AWS Access Key ID, the increment dates and the amount of data in each increment.
This option affects only the connection, not the GPG encryption of the backup increment files. Unless that is disabled, an observer will not be able to see the file names or contents.
This option is not available when using the newer boto3 backend.
See also A NOTE ON AMAZON S3 below.
This option has no effect when using the newer boto3 backend, which will always use new style subdomain bucket naming.
See also A NOTE ON AMAZON S3 below.
Glacier Deep Archive is only available when using the newer boto3 backend.
This has no effect when using the newer boto3 backend. Boto3 always attempts to use multiprocessing when it is believed this will be more efficient.
See also A NOTE ON AMAZON S3 below.
This has no effect when using the newer boto3 backend.
See also A NOTE ON AMAZON S3 below.
This has no effect when using the newer boto3 backend.
See also A NOTE ON AMAZON S3 below.
This has no effect when using the newer boto3 backend.
See also A NOTE ON AMAZON S3 below.
This is currently only used in the newer boto3 backend.
This is currently only used in the newer boto3 backend.
example of a list:
duplicity --ssh-options="-oProtocol=2 -oIdentityFile='/my/backup/id'" /home/me scp://user@host/some_dir
example with multiple parameters:
duplicity --ssh-options="-oProtocol=2" --ssh-options="-oIdentityFile='/my/backup/id'" /home/me scp://user@host/some_dir
NOTE: The ssh paramiko backend currently supports only the -i or -oIdentityFile setting. If needed provide more host specific options via ssh_config file.
The options -v4, -vn and -vnotice are functionally equivalent, as are the mixed/upper-case versions -vN, -vNotice and -vNOTICE.
Duplicity uses the URL format (as standard as possible) to define data locations. The generic format for a URL is:
It is permitted, but not recommended, to expose the password on the command line, since it could be revealed to anyone with permission to list processes. Consider setting the environment variable FTP_PASSWORD instead, which is used by most, if not all, backends, regardless of its name.
In protocols that support it, the path may be preceded by a single slash, '/path', to represent a relative path to the target home directory, or preceded by a double slash, '//path', to represent an absolute filesystem path.
Note:
Formats of each of the URL schemes follow:
Amazon Drive Backend
See also A NOTE ON AMAZON DRIVE
Azure
See also A NOTE ON AZURE ACCESS
B2
Cloud Files (Rackspace)
See also A NOTE ON CLOUD FILES ACCESS
Dropbox
Make sure to read A NOTE ON DROPBOX ACCESS first!
Local file path
FISH (Files transferred over Shell protocol) over ssh
FTP
NOTE: use lftp+, ncftp+ prefixes to enforce a specific backend, default is lftp+ftp://...
Google Docs
NOTE: use pydrive+, gdata+ prefixes to enforce a specific backend, default is pydrive+gdocs://...
Google Cloud Storage
HSI
hubiC
See also A NOTE ON HUBIC
IMAP email storage
See also A NOTE ON IMAP
MEGA.nz cloud storage (only works for accounts created prior to November 2018, uses "megatools")
NOTE: if not given in the URL, relies on password being stored within $HOME/.megarc (as used by the "megatools" utilities)
MEGA.nz cloud storage (works for all MEGA accounts, uses "MEGAcmd" tools)
NOTE: although "MEGAcmd" no longer uses a configuration file, for convenience this backend looks for the user password in the $HOME/.megav2rc file (same syntax as the old $HOME/.megarc):
[Login]
Username = MEGA_USERNAME
Password = MEGA_PASSWORD
OneDrive Backend
Par2 Wrapper Backend
See also A NOTE ON PAR2 WRAPPER BACKEND
Rclone Backend
See also A NOTE ON RCLONE BACKEND
Rsync via daemon
Rsync over ssh (only key auth)
S3 storage (Amazon)
For details see A NOTE ON AMAZON S3 and see also A NOTE ON EUROPEAN S3 BUCKETS below.
SCP/SFTP access
defaults are paramiko+scp:// and paramiko+sftp://
alternatively try pexpect+scp://, pexpect+sftp://, lftp+sftp://
See also --ssh-askpass, --ssh-options and A NOTE ON SSH
BACKENDS.
Swift (Openstack)
See also A NOTE ON SWIFT (OPENSTACK OBJECT STORAGE) ACCESS
Public Cloud Archive (OVH)
See also A NOTE ON PCA ACCESS
Tahoe-LAFS
WebDAV
alternatively try lftp+webdav[s]://
pydrive
See also A NOTE ON PYDRIVE BACKEND below.
multi
See also A NOTE ON MULTI BACKEND below.
MediaFire
See also A NOTE ON MEDIAFIRE BACKEND below.
duplicity uses time strings in two places. Firstly, many of the files duplicity creates will have the time in their filenames in the w3 datetime format as described in a w3 note at http://www.w3.org/TR/NOTE-datetime. Basically they look like "2001-07-15T04:09:38-07:00", which means what it looks like. The "-07:00" section means the time zone is 7 hours behind UTC.
Secondly, the -t, --time, and --restore-time options take a time string, which can be given in any of several formats:
When duplicity is run, it searches through the given source directory and backs up all the files specified by the file selection system. The file selection system comprises a number of file selection conditions, which are set using one of the following command line options:
For instance,
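duplicity --include /usr --exclude /usr /usr scp://user@host/some_dir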
is exactly the same as
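duplicity /usr scp://user@host/some_dir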
because the include and exclude directives match exactly the same files, and the --include comes first, giving it precedence. Similarly,
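duplicity --include /usr/local/bin --exclude /usr/local /usr scp://user@host/some_dir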
would backup the /usr/local/bin directory (and its contents), but not /usr/local/doc.
The include, exclude, include-filelist, and exclude-filelist options accept some extended shell globbing patterns. These patterns can contain *, **, ?, and [...] (character ranges). As in a normal shell, * can be expanded to any string of characters not containing "/", ? expands to any character except "/", and [...] expands to a single character of those characters specified (ranges are acceptable). The new special pattern, **, expands to any string of characters whether or not it contains "/". Furthermore, if the pattern starts with "ignorecase:" (case insensitive), then this prefix will be removed and any character in the string can be replaced with an upper- or lowercase version of itself.
Remember that you may need to quote these characters when typing them into a shell, so the shell does not interpret the globbing patterns before duplicity sees them.
The --exclude pattern option matches a file if:
1. pattern can be expanded into the file's filename,
or
2. the file is inside a directory matched by the option.
Conversely, the --include pattern matches a file if:
1. pattern can be expanded into the file's filename,
or
2. the file is inside a directory matched by the option, or
3. the file is a directory which contains a file matched by the
option.
For example,
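--exclude /usr/local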
matches e.g. /usr/local, /usr/local/lib, and /usr/local/lib/netscape. It is the same as --exclude /usr/local --exclude '/usr/local/**'.
On the other hand
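--include /usr/local/lib/netscape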
specifies that /usr, /usr/local, /usr/local/lib, and /usr/local/lib/netscape (but not /usr/doc) all be backed up. Thus you don't have to worry about including parent directories to make sure that included subdirectories have somewhere to go.
Finally,
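--include ignorecase:'/usr/[a-z0-9]foo/*/**.py'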
would match a file like /usR/5fOO/hello/there/world.py. If it did match anything, it would also match /usr. If there is no existing file that the given pattern can be expanded into, the option will not match /usr alone.
The --include-filelist, and --exclude-filelist, options also introduce file selection conditions. They direct duplicity to read in a text file (either ASCII or UTF-8), each line of which is a file specification, and to include or exclude the matching files. Lines are separated by newlines or nulls, depending on whether the --null-separator switch was given. Each line in the filelist will be interpreted as a globbing pattern the way --include and --exclude options are interpreted, except that lines starting with "+ " are interpreted as include directives, even if found in a filelist referenced by --exclude-filelist. Similarly, lines starting with "- " exclude files even if they are found within an include filelist.
For example, if the file "list.txt" contains the lines:
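/usr/local
- /usr/local/doc
/usr/local/bin
+ /var
- /var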
then --include-filelist list.txt would include /usr, /usr/local, and /usr/local/bin. It would exclude /usr/local/doc, /usr/local/doc/python, etc. It would also include /usr/local/man, as this is included within /usr/local. Finally, it is undefined what happens with /var. A single file list should not contain conflicting file specifications.
Each line in the filelist will also be interpreted as a globbing pattern the way --include and --exclude options are interpreted. For instance, if the file "list.txt" contains the lines:
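dir/foo
+ dir/bar
- **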
Then --include-filelist list.txt would be exactly the same as specifying --include dir/foo --include dir/bar --exclude ** on the command line.
Finally, the --include-regexp and --exclude-regexp options allow files to be included and excluded if their filenames match a python regular expression. Regular expression syntax is too complicated to explain here, but is covered in Python's library reference. Unlike the --include and --exclude options, the regular expression options don't match files containing or contained in matched files. So for instance
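--include '[0-9]{7}(?!foo)'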
matches any files whose full pathnames contain 7 consecutive digits which aren't followed by 'foo'. However, it wouldn't match /home even if /home/ben/1234567 existed.
When backing up to Amazon S3, two backend implementations are available. The schemes "s3" and "s3+http" are implemented using the older boto library, which has been deprecated and is no longer supported. The "boto3+s3" scheme is based on the newer boto3 library. This new backend fixes several known limitations in the older backend, which have crept in as Amazon S3 has evolved while the deprecated boto library has not kept up.
The boto3 backend should behave largely the same as the older S3 backend, but there are some differences in the handling of some of the "S3" options. Additionally, there are some compatibility differences with the new backend. For these reasons, both backends have been retained for the time being. See the documentation for specific options regarding differences related to each backend.
The boto3 backend does not support bucket creation. This is a deliberate choice which simplifies the code, and side steps problems related to region selection. Additionally, it is probably not a good practice to give your backup role bucket creation rights. In most cases the role used for backups should probably be limited to specific buckets.
The boto3 backend only supports newer domain style buckets. Amazon is moving to deprecate the older bucket style, so migration is recommended. Use the older s3 backend for compatibility with backups stored in buckets using older naming conventions.
The boto3 backend does not currently support initiating restores from the glacier storage class. When restoring a backup from glacier or glacier deep archive, the backup files must first be restored out of band. There are multiple options when restoring backups from cold storage, which vary in both cost and speed. See Amazon's documentation for details.
The Azure backend requires the Microsoft Azure Storage SDK for Python to be installed on the system. See REQUIREMENTS above.
It uses environment variables for authentication: AZURE_ACCOUNT_NAME (required), AZURE_ACCOUNT_KEY (optional), AZURE_SHARED_ACCESS_SIGNATURE (optional). One of AZURE_ACCOUNT_KEY or AZURE_SHARED_ACCESS_SIGNATURE is required.
A container name must be a valid DNS name, conforming to the following naming rules:
Pyrax is Rackspace's next-generation Cloud management API, including Cloud Files access. The cfpyrax backend requires the pyrax library to be installed on the system. See REQUIREMENTS above.
Cloudfiles is Rackspace's now deprecated implementation of OpenStack Object Storage protocol. Users wishing to use Duplicity with Rackspace Cloud Files should migrate to the new Pyrax plugin to ensure support.
The backend requires python-cloudfiles to be installed on the system. See REQUIREMENTS above.
It uses three environment variables for authentication: CLOUDFILES_USERNAME (required), CLOUDFILES_APIKEY (required), CLOUDFILES_AUTHURL (optional)
If CLOUDFILES_AUTHURL is unspecified it will default to the value provided by python-cloudfiles, which points to Rackspace; hence this value must be set in order to use other Cloud Files providers.
Amazon S3 provides the ability to choose the location of a bucket upon its creation. The purpose is to enable the user to choose a location that is topologically closer on the network, which may allow for faster data transfers.
duplicity will create a new bucket the first time a bucket access is attempted. At this point, the bucket will be created in Europe if --s3-european-buckets was given. For reasons having to do with how the Amazon S3 service works, this also requires the use of the --s3-use-new-style option. This option turns on subdomain based bucket addressing in S3. The details are beyond the scope of this man page, but it is important to know that your bucket must not contain upper case letters or any other characters that are not valid parts of a hostname. Consequently, for reasons of backwards compatibility, use of subdomain based bucket addressing is not enabled by default.
Note that you will need to use --s3-use-new-style for all operations on European buckets; not just upon initial creation.
You only need to use --s3-european-buckets upon initial creation, but you may use it at all times for consistency.
Further note that when creating a new European bucket, it can take a while before the bucket is fully accessible. At the time of this writing it is unclear to what extent this is an expected feature of Amazon S3, but in practice you may experience timeouts, socket errors or HTTP errors when trying to upload files to your newly created bucket. Give it a few minutes and the bucket should function normally.
Filename prefixes can be used in multi backend with mirror mode to define affinity rules. They can also be used in conjunction with S3 lifecycle rules to transition archive files to Glacier, while keeping metadata (signature and manifest files) on S3.
Duplicity does not require access to archive files except when restoring from backup.
Support for Google Cloud Storage relies on its Interoperable Access, which must be enabled for your account. Once enabled, you can generate Interoperable Storage Access Keys and pass them to duplicity via the GS_ACCESS_KEY_ID and GS_SECRET_ACCESS_KEY environment variables. Alternatively, you can run gsutil config -a to have the Google Cloud Storage utility populate the ~/.boto configuration file.
Enable Interoperable Access:
https://code.google.com/apis/console#:storage
Create Access Keys: https://code.google.com/apis/console#:storage:legacy
The hubic backend requires the pyrax library to be installed on the system. See REQUIREMENTS above. You will need to set your credentials for hubiC in a file called ~/.hubic_credentials, following this pattern:
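[hubic]
email=your_email
password=your_password
client_id=api_client_id
client_secret=api_client_secret
redirect_uri=http://localhost/
(client_id and client_secret are the credentials of the API application you register with hubiC; substitute your own values.)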
An IMAP account can be used as a target for the upload. The userid may be specified and the password will be requested.
The from_address_prefix may be specified (and probably should be). The text will be used as the "From" address in the IMAP server. Then on a restore (or list) command the from_address_prefix will distinguish between different backups.
The multi backend allows duplicity to combine the storage available in more than one backend store (e.g., you can store across a Google Drive account and a OneDrive account to get effectively the combined storage available in both). The URL path specifies a JSON formatted config file containing a list of the backends it will use. The URL may also specify "query" parameters to configure overall behavior. Each element of the list must have a "url" element, and may also contain an optional "description" and an optional "env" list of environment variables used to configure that backend.
Query parameters come after the file URL in standard HTTP format, for example:
multi:///path/to/config.json?mode=mirror&onfail=abort
multi:///path/to/config.json?mode=stripe&onfail=continue
multi:///path/to/config.json?onfail=abort&mode=stripe
multi:///path/to/config.json?onfail=abort
[
{
"description": "a comment about the backend",
"url": "abackend://myuser@domain.com/backup",
"env": [
{
"name" : "MYENV",
"value" : "xyz"
},
{
"name" : "FOO",
"value" : "bar"
}
],
"prefixes": ["prefix1_", "prefix2_"]
},
{
"url": "file:///path/to/dir"
} ]
Par2 Wrapper Backend can be used in combination with all other backends to create recovery files. Just add par2+ before a regular scheme (e.g. par2+ftp://user@host/dir or par2+s3+http://bucket_name). This will create par2 recovery files for each archive and upload them all to the wrapped backend.
Before restoring, archives will be verified. Corrupt archives will be repaired on the fly if there are enough recovery blocks available.
Use --par2-redundancy percent to adjust the size (and redundancy) of recovery files in percent.
The pydrive backend requires Python PyDrive package to be installed on the system. See REQUIREMENTS above.
There are two ways to use PyDrive: with a regular account or with a "service account". With a service account, a separate account is created that is accessible only through Google APIs, not through a web login. With a regular account, you can store backups in your normal Google Drive.
To use a service account, go to the Google developers console at https://console.developers.google.com. Create a project, and make sure Drive API is enabled for the project. Under "APIs and auth", click Create New Client ID, then select Service Account with P12 key.
Download the .p12 key file of the account and convert it to the .pem format:
openssl pkcs12 -in XXX.p12 -nodes -nocerts > pydriveprivatekey.pem
The content of the .pem file should be passed to the GOOGLE_DRIVE_ACCOUNT_KEY environment variable for authentication.
The email address of the account will be used as part of the URL. See URL FORMAT above.
The alternative is to use a regular account. To do this, start as above, but when creating a new Client ID, select "Installed application" of type "Other". Create a file with the following content, and pass its filename in the GOOGLE_DRIVE_SETTINGS environment variable:
client_config_backend: settings
client_config:
    client_id: <Client ID from developers' console>
    client_secret: <Client secret from developers' console>
save_credentials: True
save_credentials_backend: file
save_credentials_file: <filename to cache credentials>
get_refresh_token: True
In this scenario, the username and host parts of the URL play no role; only the path matters. During the first run, you will be prompted to visit a URL in your browser to grant access to your drive. Once granted, you will receive a verification code to paste back into Duplicity. The credentials are then cached in the file referenced above for future use.
Rclone is a powerful command line program to sync files and directories to and from various cloud storage providers.
Once you have configured an rclone remote via
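rclone config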
and successfully set up a remote (e.g. gdrive for Google Drive), assuming you can list your remote files with
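rclone ls gdrive:mydocuments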
you can start your backup with
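duplicity /mydocuments rclone://gdrive:/mydocuments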
Please note the slash after the second colon. Some storage providers will work with or without a slash after the colon, but others will not. Since duplicity will complain about a malformed URL if a slash is not present, always put it after the colon, and the backend will handle it for you.
The ssh backends support sftp and scp/ssh transport protocols. These are fundamentally different, which is a known source of user confusion. If you plan to access your backend via one of them, please inform yourself about the requirements for a server to support sftp or scp/ssh access. To make things even more confusing, the user can choose between several ssh backends via a scheme prefix: paramiko+ (default), pexpect+, lftp+... .
paramiko & pexpect support --use-scp, --ssh-askpass and --ssh-options. Only the pexpect backend allows you to define --scp-command and --sftp-command.
SSH paramiko backend (default) is a complete reimplementation of ssh protocols natively in python. Advantages are speed and maintainability. Minor disadvantage is that extra packages are needed as listed in REQUIREMENTS above. In sftp (default) mode all operations are done via the corresponding sftp commands. In scp mode (--use-scp), scp is used for put/get operations, but listing is done via an ssh remote shell.
SSH pexpect backend is the legacy ssh backend using the command line ssh binaries via pexpect. Older versions used scp for get and put operations and sftp for list and delete operations. The current version uses sftp for all four supported operations, unless the --use-scp option is used to revert to old behavior.
SSH lftp backend is simply there because lftp can interact with the ssh cmd line binaries. It is meant as a last resort in case the above options fail for some reason.
Why use sftp instead of scp? The change to sftp was made in order to allow the remote system to chroot the backup, thus providing better security and because it does not suffer from shell quoting issues like scp. Scp also does not support any kind of file listing, so sftp or ssh access will always be needed in addition for this backend mode to work properly. Sftp does not have these limitations but needs an sftp service running on the backend server, which is sometimes not an option.
Certificate verification, as implemented right now [02.2016], is available only in the webdav and lftp backends. Older Python versions (2.7.8 and earlier) and older lftp binaries need a file-based database of certification authority certificates (cacert file).
Newer Python (2.7.9+) and recent lftp versions support the system default certificates (usually in /etc/ssl/certs) and also allow giving an alternative CA cert folder via --ssl-cacert-path.
The cacert file has to be a PEM formatted text file as currently provided by the CURL project. See
After creating/retrieving a valid cacert file you should copy it to either
Duplicity searches it there in the same order and will fail if it can't find it. You can however specify the option --ssl-cacert-file <file> to point duplicity to a copy in a different location.
Finally there is the --ssl-no-check-certificate option to disable certificate verification altogether, in case some ssl library is missing or verification is not wanted. Use it with care: even with self-signed servers, manually providing the private CA certificate is definitely the safer option.
Swift is the OpenStack Object Storage service.
The backend requires python-swiftclient to be installed on the system. python-keystoneclient is also needed to use OpenStack's Keystone Identity service. See REQUIREMENTS above.
It uses the following environment variables for authentication: SWIFT_USERNAME (required), SWIFT_PASSWORD (required), SWIFT_AUTHURL (required), SWIFT_USERID (required, only for IBM Bluemix ObjectStorage), SWIFT_TENANTID (required, only for IBM Bluemix ObjectStorage), SWIFT_REGIONNAME (required, only for IBM Bluemix ObjectStorage), SWIFT_TENANTNAME (optional, the tenant can be included in the username)
If the user was previously authenticated, the following environment variables can be used instead: SWIFT_PREAUTHURL (required), SWIFT_PREAUTHTOKEN (required)
If SWIFT_AUTHVERSION is unspecified, it will default to version 1.
PCA is a long-term data archival solution by OVH. It runs a slightly modified version of OpenStack Swift that introduces latency in the data retrieval process. It is a good pick for a multi backend configuration where it receives the backup volumes while another backend is used to store manifests and signatures.
The backend requires python-swiftclient to be installed on the system. python-keystoneclient is also needed to interact with OpenStack's Keystone Identity service. See REQUIREMENTS above.
It uses the following environment variables for authentication: PCA_USERNAME (required), PCA_PASSWORD (required), PCA_AUTHURL (required), PCA_USERID (optional), PCA_TENANTID (optional, but either the tenant name or tenant id must be supplied), PCA_REGIONNAME (optional), PCA_TENANTNAME (optional, but either the tenant name or tenant id must be supplied)
If the user was previously authenticated, the following environment variables can be used instead: PCA_PREAUTHURL (required), PCA_PREAUTHTOKEN (required)
If PCA_AUTHVERSION is unspecified, it will default to version 2.
This backend requires the mediafire python library to be installed on the system. See REQUIREMENTS.
Use URL escaping for username (and password, if provided via command line):
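mf://duplicity%40example.com:p%40ssword@mediafire.com/some_folder
(an illustrative URL; %40 is the URL-escaped @ character in the e-mail address and password)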
The destination folder will be created for you if it does not exist.
Signing and symmetrically encrypting at the same time with the gpg binary on the command line, as used within duplicity, is a particularly challenging issue. Tests showed that the following combinations proved to work.
1. Setup gpg-agent properly. Use the option --use-agent and enter both passphrases (symmetric and sign key) in the gpg-agent's dialog.
2. Use a PASSPHRASE of your choice for symmetric encryption while the signing key has an empty passphrase.
3. The used PASSPHRASE for symmetric encryption and the passphrase of the signing key are identical.
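For illustration, combination 1 might be invoked roughly like this (the sign key ID is a placeholder and gpg-agent must already be set up to supply both passphrases):
duplicity --use-agent --sign-key XXXXXXXX /home/me sftp://uid@other.host/some_dir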
Hard links are currently unsupported (they will be treated as non-linked regular files).
Bad signatures will be treated as empty instead of logging an appropriate error message.
This section describes duplicity's basic operation and the format of its data files. It should not be necessary to read this section to use duplicity.
The files used by duplicity to store backup data are tarfiles in GNU tar format. They can be produced independently by rdiffdir(1). For incremental backups, new files are saved normally in the tarfile. But when a file changes, instead of storing a complete copy of the file, only a diff is stored, as generated by rdiff(1). If a file is deleted, a 0 length file is stored in the tar. It is possible to restore a duplicity archive "manually" by using tar and then cp, rdiff, and rm as necessary. These duplicity archives have the extension difftar.
Both full and incremental backup sets have the same format. In effect, a full backup set is an incremental one generated from an empty signature (see below). The files in full backup sets will start with duplicity-full while the incremental sets start with duplicity-inc. When restoring, duplicity applies patches in order, so deleting, for instance, a full backup set may make related incremental backup sets unusable.
In order to determine which files have been deleted, and to calculate diffs for changed files, duplicity needs to process information about previous sessions. It stores this information in the form of tarfiles where each entry's data contains the signature (as produced by rdiff) of the file instead of the file's contents. These signature sets have the extension sigtar.
Signature files are not required to restore a backup set, but without an up-to-date signature, duplicity cannot append an incremental backup to an existing archive.
To save bandwidth, duplicity generates full signature sets and incremental signature sets. A full signature set is generated for each full backup, and an incremental one for each incremental backup. These start with duplicity-full-signatures and duplicity-new-signatures respectively. These signatures will be stored both locally and remotely. The remote signatures will be encrypted if encryption is enabled. The local signatures are not encrypted and are stored in the archive dir (see --archive-dir ).
Duplicity requires a POSIX-like operating system with a python interpreter version 2.6+ installed. It is best used under GNU/Linux.
Some backends also require additional components (probably available as packages for your specific platform):
Most backends were contributed individually. Information about their authorship may be found in the corresponding file's header.
Also we'd like to thank everybody posting issues to the mailing list or on launchpad, sending in patches or contributing otherwise. Duplicity wouldn't be as stable and useful if it weren't for you.
A special thanks goes to rsync.net, a Cloud Storage provider with explicit support for duplicity, for several monetary donations and for providing a special "duplicity friends" rate for their offsite backup service. Email info@rsync.net for details.
November 11, 2020 | Version 0.8.17 |