=head1 OmniPITR - omnipitr-backup-slave

=head2 USAGE

  /some/path/omnipitr/bin/omnipitr-backup-slave [options]

Options:

=over

=item --data-dir (-D)

Where the PostgreSQL datadir is located (path).

=item --source (-s)

Where the WAL files are delivered by L<omnipitr-archive (1)> and used by
L<omnipitr-restore (1)>.

You can also specify compression for the source. Check the L<"COMPRESSION">
section of the doc.

=item --dst-local (-dl)

Where to copy the hot backup files on the current server (you can provide
many of these).

You can also specify compression per destination. Check the L<"COMPRESSION">
section of the doc.

=item --dst-remote (-dr)

Where to copy the hot backup files on a remote server. Supported ways to
transport files are rsync and rsync over ssh. Please see
L<"Remote destination specification"> for more information (you can provide
many of these).

You can also specify compression per destination. Check the L<"COMPRESSION">
section of the doc.

=item --temp-dir (-t)

Where to create temporary files (defaults to /tmp or the I<$TMPDIR>
environment variable location).

=item --log (-l)

Name of the logfile (actually a template, as it supports %% L<strftime (3)>
markers). Unfortunately, due to the %x usage by PostgreSQL, we cannot use %%
macros directly. Instead, any occurrence of the ^ character in the log path
will first be changed to %, and then passed to strftime.

=item --filename-template (-f)

Template for naming output files. Check the L<"FILENAMES"> section for
details.

=item --pid-file

Name of the file to use as a pidfile. If it is specified, then only one copy
of I<omnipitr-backup-slave> (with this pidfile) can run at the same time.
Trying to run a second copy of I<omnipitr-backup-slave> will result in an
error.

=item --removal-pause-trigger (-p)

Name of the file to be created to pause removal of old/obsolete segments.
Should be the same as the removal-pause-trigger configured for
L<omnipitr-restore (1)>.

=item --verbose (-v)

Log verbosely what is happening.

=item --not-nice (-nn)

Do not use nice for compression.

=item --gzip-path (-gp)

Full path to the gzip program - in case you can't set a proper PATH
environment variable.

=item --bzip2-path (-bp)

Full path to the bzip2 program - in case you can't set a proper PATH
environment variable.

=item --lzma-path (-lp)

Full path to the lzma program - in case you can't set a proper PATH
environment variable.

=item --nice-path (-np)

Full path to the nice program - in case you can't set a proper PATH
environment variable.

=item --tar-path (-tp)

Full path to the tar program - in case you can't set a proper PATH
environment variable.

=item --rsync-path (-rp)

Full path to the rsync program - in case you can't set a proper PATH
environment variable.

=item --pgcontroldata-path (-pp)

Full path to the pg_controldata program - in case you can't set a proper PATH
environment variable.

=back

=head2 DESCRIPTION

This program should be run from a cronjob, or manually by the database
administrator (a sample crontab entry is shown below). Running it produces 2
files, usually named HOST-data-YYYY-MM-DD.tar and HOST-xlog-YYYY-MM-DD.tar.
These files can optionally be compressed and delivered to many places - both
local (on the same server) and remote (via rsync).

Which options should be given depends only on the installation, but generally
you will need at least:

=over

=item * --data-dir

Backup will process files in this directory.

=item * --log

to make sure that information about backup progress is logged someplace

=item * one of --dst-local or --dst-remote

to specify where to send the backup files

=back

Of course you can provide many --dst-local destinations, many --dst-remote
destinations, or a mix of these.
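As an illustration, a crontab entry for a nightly backup might look like the
sketch below (the schedule and all paths are only examples - adjust them to
your installation, and run the job as the system user that owns the data
directory):

  # illustrative crontab entry: run the hot backup every night at 2am
  0 2 * * * /opt/omnipitr/bin/omnipitr-backup-slave -D /var/lib/pgsql/data -l /var/log/omnipitr/backup.log -dl /mnt/backups/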
Generally, omnipitr-backup-slave will try to deliver the backup files to all
destinations. In case a remote destination fails, omnipitr-backup-slave will
retry 3 times, with a 5 minute delay between tries. In case of errors when
writing to a local destination, that destination is skipped and the error is
logged.

Backups will be transferred to destinations in this order:

=over

=item 1. All B<local> destinations, in the order provided on the command line

=item 2. All B<remote> destinations, in the order provided on the command line

=back

=head3 Remote destination specification

I<omnipitr-backup-slave> delivers backup files to remote destinations using
the rsync program. Both direct rsync and rsync over ssh are supported (it's
better to use direct rsync - it uses fewer resources due to the lack of
encryption).

The destination url/location should be in a format that is usable by the
I<rsync> program. For example you can use:

=over

=item * rsync://user@remote_host/module/path/

=item * host:/path/

=back

To allow remote delivery you need to have the rsync program. In case you're
using rsync over ssh, the I<ssh> program also has to be available. In case
your rsync/ssh programs are in custom directories, simply set the I<$PATH>
environment variable before running I<omnipitr-backup-slave>.

=head2 COMPRESSION

Every destination (and the source of xlogs) can have compression specified.
To use it you should prefix the destination path/url with the compression
type followed by an '=' sign.

Allowed compression types:

=over

=item * gzip

Compresses with the gzip program, used file extension is .gz

=item * bzip2

Compresses with the bzip2 program, used file extension is .bz2

=item * lzma

Compresses with the lzma program, used file extension is .lzma

=back

If you want to pass any extra arguments to the compression program, you can
either:

=over

=item * make a wrapper

Write a program/script that is named in the same way as your actual
compression program, but adds some parameters to the call (see the sketch
after this list).

=item * use environment variables

All of the supported compression programs use environment variables:

=over

=item * gzip - GZIP

=item * bzip2 - BZIP2

=item * lzma - XZ_OPT

=back

For details, please consult the manual of your chosen compression tool.

=back
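For example, a minimal wrapper sketch (the path to the real gzip binary and
the chosen option are only assumptions - adjust them to your system) could be
saved as an executable file named C<gzip> and pointed to with --gzip-path:

  #!/bin/bash
  # hypothetical wrapper: behave like gzip, but always add an extra option
  exec /bin/gzip --fast "$@"

If you prefer the environment variable route instead, setting e.g.
GZIP="--fast" in the environment that runs omnipitr-backup-slave should have
a similar effect.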
=head2 FILENAMES

Naming of the backup files might be important depending on the deployment.
Generally, the generated filenames are built from a template, with the
default template being:

  __HOSTNAME__-__FILETYPE__-^Y-^m-^d.tar__CEXT__

Within the template (specified with the --filename-template option) you can
use the following markers:

=over

=item * __HOSTNAME__

Name of the server the backup is made on - as reported by the
L<hostname (1)> program.

=item * __FILETYPE__

It is actually required to have __FILETYPE__ - it specifies whether the file
contains data (data) or xlog segments (xlog).

=item * __CEXT__

Based on the compression algorithm chosen for the given delivery. Can be
empty (no compression), or contains a dot (.) and the literal extension
associated with the chosen compression program.

=item * any ^? markers as in a L<strftime (3)> call, but ^ will first be
changed to %.

=back

The filename template is evaluated at start, so any timestamp (^? markers)
will relate to the date/time of the beginning of the backup process.

=head2 TABLESPACES

If I<omnipitr-backup-slave> detects additional tablespaces, they will also be
added to the generated tarball. Since the full path to the tablespace
directory is important and should be preserved, and tar normally doesn't let
you store files whose path starts with "/" (as it would be dangerous),
I<omnipitr-backup-slave> uses the following approach: all tablespaces will be
stored in the tar, and upon extraction they will be put in the directory
"tablespaces", and under it - the full path to the tablespace directory.

For example, assuming PostgreSQL PGDATA is in /var/lib/pgsql/data, and it has
3 extra tablespaces placed in:

=over

=item * /mnt/san/tablespace

=item * /home/whatever/xxx

=item * /media/ssd

=back

the generated DATA tarball will contain 2 directories:

=over

=item * data - copy of /var/lib/pgsql/data

=item * tablespaces - which contains the full directory structure leading to:

=over

=item * tablespaces/mnt/san/tablespace - copy of /mnt/san/tablespace

=item * tablespaces/home/whatever/xxx - copy of /home/whatever/xxx

=item * tablespaces/media/ssd - copy of /media/ssd

=back

=back

Thanks to this approach, if you create a symlink "tablespaces" pointing to
the root directory (I</>) B<before> exploding the tarball, all tablespace
files will be created in the correct places right away. This is of course
not necessary, but it will help if you ever need to recover from such a
backup.

=head2 EXAMPLES

=head3 Minimal setup, with copying files to a local directory:

  /.../omnipitr-backup-slave -D /mnt/data -l /var/log/omnipitr/backup.log -dl /mnt/backups/

=head3 Minimal setup, with compression, and copying files to a remote directory over rsync:

  /.../omnipitr-backup-slave -D /mnt/data/ -l /var/log/omnipitr/backup.log -dr bzip2=rsync://slave/postgres/backups/

=head3 2 remote, compressed destinations, 1 local, with auto-rotated logfile and modified filenames

  /.../omnipitr-backup-slave -D /mnt/data/ -l /var/log/omnipitr/backup-^Y-^m-^d.log -dr bzip2=rsync://slave/postgres/backups/ -dr gzip=backups:/mnt/hotbackups/ -dl /mnt/backups/ -f "main-__FILETYPE__-^Y^m^d_^H^M^S.tar__CEXT__"

=head2 IMPORTANT NOTICES

=over

=item *

If you're using a compressed source dir (wal archive),
I<omnipitr-backup-slave> has to uncompress all of the xlogs before putting
them in the xlog tarball. This means that you should have enough free disk
space for this purpose in the place where you create temporary directories.
Alternatively, do not use compression when sending WAL segments to the
standby server - in this case, decompression will not be necessary.

=item *

I<omnipitr-backup-slave> archives the whole source directory, i.e. all files
from it. This means that if you're using delayed recovery (the -w option to
L<omnipitr-restore (1)>), there will be more files there, and the backup will
be larger. This is especially important if you're using a compressed wal
archive (please see the note above).

=back

=head2 COPYRIGHT

The OmniPITR project is Copyright (c) 2009-2010 OmniTI. All rights reserved.