STOREBACKUP(1)          User Contributed Perl Documentation          STOREBACKUP(1)
storeBackup.pl - fancy compressing managing checksumming hard-linking cp -ua
This program copies trees to another location. Every file copied is potentially compressed (see --exceptSuffix). Backups after the first one compare each file, via its md5 checksum, with the last stored version; if they are equal, only a hard link to the existing copy is made. It also checks mtime, ctime and size to recognize identical files in older backups very quickly. It can also back up big image files fast and efficiently on a per-block basis (data deduplication).
You can overwrite options in the configuration file on the command line.
storeBackup.pl --help

or

storeBackup.pl -g configFile

or

storeBackup.pl [-f configFile] [-s sourceDir] [-b backupDirectory] [-S series]
  [--print] [-T tmpdir] [-L lockFile] [--unlockBeforeDel]
  [--exceptDirs dir1,dir2,dir3] [--contExceptDirsErr]
  [--includeDirs dir1,dir2,dir3] [--exceptRule rule] [--includeRule rule]
  [--exceptTypes types] [--cpIsGnu] [--linkSymlinks] [--precommand job]
  [--postcommand job] [--followLinks depth] [--highLatency] [--ignorePerms]
  [--lateLinks [--lateCompress]] [--checkBlocksSuffix suffix]
  [--checkBlocksMinSize size] [--checkBlocksBS]
  [--checkBlocksRule0 rule [--checkBlocksBS0 size] [--checkBlocksCompr0]
    [--checkBlocksRead0 filter] [--checkBlocksParallel0]]
  [--checkBlocksRule1 rule [--checkBlocksBS1 size] [--checkBlocksCompr1]
    [--checkBlocksRead1 filter] [--checkBlocksParallel1]]
  [--checkBlocksRule2 rule [--checkBlocksBS2 size] [--checkBlocksCompr2]
    [--checkBlocksRead2 filter] [--checkBlocksParallel2]]
  [--checkBlocksRule3 rule [--checkBlocksBS3 size] [--checkBlocksCompr3]
    [--checkBlocksRead3 filter] [--checkBlocksParallel3]]
  [--checkBlocksRule4 rule [--checkBlocksBS4 size] [--checkBlocksCompr4]
    [--checkBlocksRead4 filter] [--checkBlocksParallel4]]
  [--checkDevices0 list [--checkDevicesDir0] [--checkDevicesBS0]
    [--checkDevicesCompr0] [--checkDevicesParallel0]]
  [--checkDevices1 list [--checkDevicesDir1] [--checkDevicesBS1]
    [--checkDevicesCompr1] [--checkDevicesParallel1]]
  [--checkDevices2 list [--checkDevicesDir2] [--checkDevicesBS2]
    [--checkDevicesCompr2] [--checkDevicesParallel2]]
  [--checkDevices3 list [--checkDevicesDir3] [--checkDevicesBS3]
    [--checkDevicesCompr3] [--checkDevicesParallel3]]
  [--checkDevices4 list [--checkDevicesDir4] [--checkDevicesBS4]
    [--checkDevicesCompr4] [--checkDevicesParallel4]]
  [--saveRAM] [-c compress] [-u uncompress] [-p postfix]
  [--noCompress number] [--queueCompress number] [--noCopy number]
  [--queueCopy number] [--withUserGroupStat] [--userGroupStatFile filename]
  [--exceptSuffix suffixes] [--addExceptSuffix suffixes]
  [--minCompressSize size] [--comprRule] [--doNotCompressMD5File]
  [--chmodMD5File] [-v] [-d level] [--progressReport number] [--printDepth]
  [--ignoreReadError] [--suppressWarning key] [--linkToRecent name]
  [--doNotDelete] [--deleteNotFinishedDirs] [--resetAtime]
  [--keepAll timePeriod] [--keepWeekday entry]
  [[--keepFirstOfYear] [--keepLastOfYear] [--keepFirstOfMonth]
   [--keepLastOfMonth] [--firstDayOfWeek day] [--keepFirstOfWeek]
   [--keepLastOfWeek] [--keepDuplicate] [--keepMinNumber] [--keepMaxNumber]
   | [--keepRelative]]
  [-l logFile [--plusLogStdout] [--suppressTime] [-m maxFilelen]
   [[-n noOfOldFiles] | [--saveLogs]] [--compressWith compressprog]]
  [--logInBackupDir [--compressLogInBackupDir]
   [--logInBackupDirFileName logFile]]
  [otherBackupSeries ...]
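For example (the paths and series names below are hypothetical), a backup can be driven entirely from the command line, or from a configuration file with single options overridden on the command line as described above:

    # back up /home into the series 'home' below /backup
    storeBackup.pl -s /home -b /backup -S home

    # the same, driven by a configuration file, with the source
    # directory overridden on the command line
    storeBackup.pl -f /etc/storeBackup.conf -s /home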
show this help
generate a template of the configuration file
print configuration read from configuration file or command line and stop
configuration file (instead of or additionally to options on command line)
source directory (must exist)
top level directory of all backups (must exist)
series directory, default is 'default'; relative path from backupDir
directory for temporary files, default is </tmp>
lock file; if it exists, a new instance will finish immediately if an old one is already running; default is $lockFile
remove the lock file before deleting old backups; default is to delete the lock file after removing old backups
directories to exclude from the backup (relative paths); wildcards are possible and should be quoted to avoid expansion by the shell (see the example below); use this parameter multiple times for multiple directories
continue if one or more of the excluded directories do not exist (default is to stop processing)
directories to include in the backup (relative paths); wildcards are possible and have to be quoted; use this parameter multiple times for multiple directories
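A sketch of the exclude options above, with the wildcards quoted so the shell does not expand them (the directory names are illustrative):

    storeBackup.pl -s /home -b /backup \
        --exceptDirs '*/.cache' --exceptDirs 'tmp*' \
        --contExceptDirsErr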
Files to exclude from the backup; see README: 'including / excluding files and directories'
Files to include in the backup - like exceptRule; see README: 'including / excluding files and directories'
write a file named .storeBackup.notSaved.bz2 with the names of all skipped files
do not save the specified types of files, allowed: Sbcfpl
  S - file is a socket
  b - file is a block special file
  c - file is a character special file
  f - file is a plain file
  p - file is a named pipe
  l - file is a symbolic link
Sbc can only be saved when using option --cpIsGnu
Activate this option if your system's cp is a full-featured GNU version. In this case you will also be able to back up several special file types like sockets.
hard link identical symlinks
exec job before starting the backup; checks lockFile (-L) before starting (e.g. can be used for rsync); stops execution if the job returns an exit status != 0. This parameter is parsed like a line in the configuration file and normally has to be quoted (see the example below).
exec job after finishing the backup, but before erasing old backups; reports if the job returns an exit status != 0. This parameter is parsed like a line in the configuration file and normally has to be quoted.
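Because --precommand and --postcommand are parsed like configuration file lines, the whole job normally has to be quoted; a minimal sketch (the script paths are hypothetical):

    storeBackup.pl -s /home -b /backup \
        --precommand '/usr/local/bin/mount_backup.sh' \
        --postcommand '/usr/local/bin/umount_backup.sh'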
follow symbolic links like directories up to the specified depth; default = 0 -> do not follow links
use this for a very high latency line (e.g. a VPN over the internet) for better parallelization
If this option is chosen, files will not necessarily have the same permissions and owner as the originals. This speeds up backups on network drives a lot. Recovery with storeBackupRecover.pl will restore them correctly.
do *not* write hard links to existing files in the backup during the backup; you have to call the program storeBackupWriteLateLink.pl later on your server. If you set this flag to 'yes', you have to run storeBackupUpdateBackup.pl later - see the description for that program.
only in combination with --lateLinks; compression of files >= minCompressSize will be done later; the file is (temporarily) copied into the backup
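A sketch of the late-link workflow described in the two options above (the paths are illustrative):

    # fast backup over a slow link: defer hard linking and compression
    storeBackup.pl -s /home -b /backup --lateLinks --lateCompress
    # later, run storeBackupUpdateBackup.pl on the backup server to
    # write the deferred hard links (see that program's documentation)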
Files with one of these suffixes, for which storeBackup will make an md5 check on blocks of the file; executed after --checkBlocksRule(n). This option can be repeated multiple times.
Only check files specified in --checkBlocksSuffix if their file size is at least this value, default is 100M
Block size for files specified with --checkBlocksSuffix, default is 1M (1 megabyte)
if set, the blocks generated due to --checkBlocksSuffix are compressed
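For instance, to deduplicate large ISO images block by block, the suffix-related options above can be combined (the suffix and sizes are chosen for illustration):

    storeBackup.pl -s /data -b /backup \
        --checkBlocksSuffix '\.iso' \
        --checkBlocksMinSize 500M --checkBlocksBS 1M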
Files for which storeBackup will make an md5 check depending on blocks of that file.
Block size for option checkBlocksRule, default is 1M (1 megabyte)
if set, the blocks generated due to this rule are compressed
Filter for reading the file to treat as a blocked file, e.g. 'gzip -d' if the file is compressed; default is no read filter. This parameter is parsed like a line in the configuration file and normally has to be quoted, e.g. 'gzip -d'.
Read files specified here in parallel to "normal" ones. This only makes sense if they are on a different disk. Default value is 'no'
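A hedged sketch of a rule-based blocked-file setup, assuming gzip-compressed dump files (the rule expression follows the README section on rules; file names and the rule itself are illustrative):

    storeBackup.pl -s /data -b /backup \
        --checkBlocksRule0 '$file =~ m#\.dump\.gz$#' \
        --checkBlocksBS0 1M --checkBlocksCompr0 \
        --checkBlocksRead0 'gzip -d' --checkBlocksParallel0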
List of devices for an md5 check depending on blocks of these devices
Directory where to store the backup of the device
Block size of option checkDevices0, default is 1M (1 megabyte)
Compress blocks resulting from option checkDevices0
Read the devices specified here in parallel to the rest of the backup. This only makes sense if they are on a different disk. Default value is 'no'
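A sketch of a per-block device backup using the options above (the device and target directory are examples):

    storeBackup.pl -s /home -b /backup \
        --checkDevices0 /dev/sdb1 --checkDevicesDir0 devices \
        --checkDevicesBS0 1M --checkDevicesCompr0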
write temporary dbm files in --tmpdir; use this if you do not have enough RAM
compress command (with options), default is <bzip2>. This parameter is parsed like a line in the configuration file and normally has to be quoted, e.g. 'gzip -9'
uncompress command (with options), default is <bzip2 -d>. This parameter is parsed like a line in the configuration file and normally has to be quoted, e.g. 'gzip -d'
postfix to add after compression, default is <.bz2>
do not compress files with the following suffixes (uppercase variants included): ('\.zip', '\.bz2', '\.gz', '\.tgz', '\.jpg', '\.gif', '\.tiff', '\.tif', '\.mpeg', '\.mpg', '\.mp3', '\.ogg', '\.gpg', '\.png'). This option can be repeated multiple times. If you do not want any compression, set this option to '.*'
like --exceptSuffix, but adds the suffixes to the defaults instead of replacing them
Files smaller than this size will never be compressed but copied
alternative to --exceptSuffix and --minCompressSize: definition of a rule determining which files will be compressed
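For example, to switch the whole backup from the default bzip2 to gzip, the compress command, uncompress command and postfix described above have to be set consistently:

    storeBackup.pl -s /home -b /backup \
        -c 'gzip -9' -u 'gzip -d' -p '.gz'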
maximal number of parallel compress operations, default = chosen automatically
length of queue to store files before compression, default = 1000
maximal number of parallel copy operations, default = 1
length of queue to store files before copying, default = 1000
write statistics about used space in log file
write statistics about used space to the file named here; the file will be overwritten each time
do not compress .md5CheckSumFile
permissions of .md5CheckSumFile and corresponding .storeBackupLinks directory, default is 0600
verbose messages
generate debug messages; levels are 0 (none, default), 1 (some) and 2 (many) messages, especially in --exceptRule and --includeRule
reset access time in the source directory - but this will change ctime (time of last modification of file status information)
check only, do not delete any backup
delete old backups which have not been finished; this will not happen if --doNotDelete is set
keep backups which are not older than the specified amount of time. This is like a default value for all days in --keepWeekday. Deleting begins at the end of the script. The time range has to be specified in format 'dhms', e.g. 10d4h means 10 days and 4 hours. default = 20d
keep backups for the specified weekdays for the specified amount of time. Overwrites the default values chosen in --keepAll. 'Mon,Wed:40d5m Sat:60d10m' means:
  keep backups from Mon and Wed 40 days + 5 mins
  keep backups from Sat 60 days + 10 mins
  keep backups from the rest of the days as specified in --keepAll
If you also use the 'archive flag', the affected directories will not be deleted via --keepMaxNumber: a10d4h means 10 days and 4 hours plus 'archive flag'. E.g. 'Mon,Wed:a40d5m Sat:60d10m' means:
  keep backups from Mon and Wed 40 days + 5 mins + 'archive'
  keep backups from Sat 60 days + 10 mins
  keep backups from the rest of the days as specified in --keepAll
A sketch of the two retention options follows below.
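A sketch combining --keepAll and --keepWeekday (the periods are examples):

    storeBackup.pl -s /home -b /backup \
        --keepAll 30d \
        --keepWeekday 'Mon,Wed:a40d5m Sat:60d10m'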
do not delete the first backup of a year; format is timePeriod with possible 'archive flag'
do not delete the last backup of a year; format is timePeriod with possible 'archive flag'
do not delete the first backup of a month; format is timePeriod with possible 'archive flag'
do not delete the last backup of a month; format is timePeriod with possible 'archive flag'
default: 'Sun'. This value is used for calculating --keepFirstOfWeek and --keepLastOfWeek
do not delete the first backup of a week; format is timePeriod with possible 'archive flag'
do not delete the last backup of a week; format is timePeriod with possible 'archive flag'
keep multiple backups of one day up to timePeriod; format is timePeriod, 'archive flag' is not possible; default = 7d
Keep that minimum of backups. Multiple backups of one day are counted as one backup. Default is 10.
Try to keep only that maximum of backups. If you have more backups, the following sequence of deleting will happen:
  - delete all duplicates of a day, beginning with the oldest ones, except the last one of every day
  - if this is not enough, delete the rest of the backups, beginning with the oldest, but *never* a backup with the 'archive flag' or the last backup
Alternative deletion scheme. If you use this option, all other keep options are ignored. Preserves backups depending on their *relative* age. Example: -R '1d 7d 61d 92d' will (try to) ensure that there is always
  - one backup between 1 day and 7 days old
  - one backup between 7 days and ~2 months old
  - one backup between ~2 months and ~3 months old
If there is no backup for a specified timespan (e.g. because the last backup was done more than 2 weeks ago), the next older backup will be used for this timespan.
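For instance, to rely solely on relative ages as described above (all other keep options are then ignored):

    storeBackup.pl -s /home -b /backup --keepRelative '1d 7d 61d 92d'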
print progress report after each 'number' files
print the depth of the directory currently being read during the backup
ignore read errors in source directory; not readable directories do not cause storeBackup.pl to stop processing
suppress (unwanted) warnings in the log files; to suppress warnings, the following keys can be used:
  excDir (suppresses the warning that excluded directories do not exist)
  fileChange (suppresses the warning that a file has changed during the backup)
  crSeries (suppresses the warning that storeBackup had to create the 'default' series)
  hashCollision (suppresses the warning if a possible hash collision is detected)
  fileNameWithLineFeed (suppresses the warning if a filename contains a line feed)
This option can be repeated multiple times on the command line.
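For example, to silence two of the warnings listed above (which keys you need depends on your setup):

    storeBackup.pl -s /home -b /backup \
        --suppressWarning excDir --suppressWarning fileChange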
after a successful backup, set a symbolic link to that backup and delete existing older links with the same name
log file (default is STDOUT)
if you specify a log file with --logFile you can additionally print the output to STDOUT with this flag
suppress output of time in logfile
maximal length of log file, default = 1e6
number of old log files, default = 5
save log files with date and time instead of deleting the old ones (see -n noOfOldFiles)
compress saved log files (e.g. with 'gzip -9'); default is 'bzip2'. This parameter is parsed like a line in the configuration file and normally has to be quoted.
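A sketch combining the logging options above (the log file path is hypothetical):

    storeBackup.pl -s /home -b /backup \
        -l /var/log/storeBackup.log --plusLogStdout \
        -m 1000000 --saveLogs --compressWith 'gzip -9'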
write log file (also) in the backup directory Be aware that this log does not contain all error messages of the one specified with --logFile!
compress the log file in the backup directory
filename to use for writing the above log file, default is .storeBackup.log
List of other backup series to consider for hard linking. Relative paths from backupDir! Format (examples):
  backupSeries/2002.08.29_08.25.28 -> consider this backup
or
  0:backupSeries   -> last (youngest) backup in <backupDir>/backupSeries
  1:backupSeries   -> the one before the last in <backupDir>/backupSeries
  n:backupSeries   -> the n'th before the last in <backupDir>/backupSeries
  3-5:backupSeries -> the 3rd, 4th and 5th in <backupDir>/backupSeries
  all:backupSeries -> all in <backupDir>/backupSeries
Default is to link to the last backup in every series.
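For example (the series names are hypothetical):

    # link against the youngest backup of series 'laptop'
    # and against all backups of series 'server'
    storeBackup.pl -f /etc/storeBackup.conf 0:laptop all:server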
Copyright (c) 2000,2004,2008-2009,2012 by Heinz-Josef Claes (see README). Published under the GNU General Public License or any later version.
2020-07-08                         perl v5.30.3                    STOREBACKUP(1)