diff --git a/Example Proxmox.md b/Example Proxmox.md
new file mode 100644
index 0000000..3284ba9
--- /dev/null
+++ b/Example Proxmox.md
@@ -0,0 +1,81 @@
+## Backup a Proxmox cluster with HA replication
+
+Due to the nature of Proxmox we had to make a few enhancements to zfs-autobackup. These will probably also benefit other systems that use their own replication in combination with zfs-autobackup.
+
+Data under rpool/data can live on multiple nodes of the cluster, and those filesystem names are unique across the whole cluster. Because of this we should back up rpool/data of all nodes to the same destination; that way we won't have duplicate backups of the filesystems that are replicated. Thanks to various options you can even migrate guests between hosts and zfs-autobackup will be fine. (It will get the next backup from the new node automatically.)
+
+In the example below we have 3 nodes, named pve1, pve2 and pve3.
+
+### Preparing the Proxmox nodes
+
+No preparation is needed; the script takes care of everything. You only need to set up the SSH keys so that the backup server can access the Proxmox servers.
+
+TIP: make sure your backup server is firewalled and cannot be reached from any production machine.
+
+### SSH config on the backup server
+
+I use ~/.ssh/config to specify how to reach the various hosts.
+
+In this example we are making an offsite copy and use port forwarding to reach the Proxmox machines:
+```
+Host *
+    ControlPath ~/.ssh/control-master-%r@%h:%p
+    ControlMaster auto
+    ControlPersist 3600
+    Compression yes
+
+Host pve1
+    Hostname some.host.com
+    Port 10001
+
+Host pve2
+    Hostname some.host.com
+    Port 10002
+
+Host pve3
+    Hostname some.host.com
+    Port 10003
+```
+
+### Backup script
+
+I use the following backup script on the backup server.
+
+Adjust the variables HOSTS, TARGET and NAME to your needs.
+
+```shell
+#!/bin/bash
+
+HOSTS="pve1 pve2 pve3"
+TARGET=rpool/pvebackups
+NAME=prox
+
+zfs create -p $TARGET/data &>/dev/null
+for HOST in $HOSTS; do
+
+    echo "################################### RPOOL $HOST"
+
+    # enable backup
+    ssh $HOST "zfs set autobackup:rpool_$NAME=child rpool/ROOT"
+
+    # backup rpool to a specific directory per host
+    zfs create -p $TARGET/rpools/$HOST &>/dev/null
+    zfs-autobackup --keep-source=1d1w,1w1m --ssh-source $HOST rpool_$NAME $TARGET/rpools/$HOST --clear-mountpoint --clear-refreservation --ignore-transfer-errors --strip-path 2 --verbose --no-holds "$@"
+
+    zabbix-job-status backup_${HOST}_rpool_$NAME daily $? >/dev/null 2>&1
+
+
+    echo "################################### DATA $HOST"
+
+    # enable backup
+    ssh $HOST "zfs set autobackup:data_$NAME=child rpool/data"
+
+    # backup data filesystems to a common directory
+    zfs-autobackup --keep-source=1d1w,1w1m --ssh-source $HOST data_$NAME $TARGET/data --clear-mountpoint --clear-refreservation --ignore-transfer-errors --strip-path 2 --verbose --ignore-replicated --min-change 300000 --no-holds "$@"
+
+    zabbix-job-status backup_${HOST}_data_$NAME daily $? >/dev/null 2>&1
+
+done
+```
+
+This script will also send the backup status to Zabbix (if you've installed my zabbix-job-status script: https://github.com/psy0rz/stuff/tree/master/zabbix-jobs).
diff --git a/Home.md b/Home.md
index 0ad44f9..8ecf8d1 100644
--- a/Home.md
+++ b/Home.md
@@ -47,22 +47,7 @@ Generating public/private rsa key pair.
 Enter file in which to save the key (/root/.ssh/id_rsa): 
 Enter passphrase (empty for no passphrase): 
 Enter same passphrase again: 
-Your identification has been saved in /root/.ssh/id_rsa.
-Your public key has been saved in /root/.ssh/id_rsa.pub.
-The key fingerprint is:
-SHA256:McJhCxvaxvFhO/3e8Lf5gzSrlTWew7/bwrd2U2EHymE root@backup
-The key's randomart image is:
-+---[RSA 2048]----+
-| + = |
-| + X * E . |
-| . = B + o o . |
-| . o + o o.|
-| S o .oo|
-| . + o= +|
-| . ++==.|
-| .+o**|
-| .. +B@|
-+----[SHA256]-----+
+...
 root@backup:~#
 ```
@@ -192,6 +177,10 @@ Once you've got the correct settings for your situation, you can just store the
 
 Or just create a script and run it manually when you need it.
 
+### Monitoring
+
+Don't forget to monitor the results of your backups; look at [Monitoring](Monitoring) for more info.
+
 ## Use as snapshot tool
 
 You can use zfs-autobackup to only make snapshots.
@@ -276,7 +265,9 @@ After that you can rename the disk image from the temporary location to the loca
 * [Handling ZFS encryption](Encryption)
 * [Transfer buffering, compression and rate limiting.](Piping)
 * [Custom Pre- and post-snapshot commands](PrePost)
-
+* [Monitoring](Monitoring)
+* [Proxmox Example](Example%20Proxmox.md)
+
 ## Usage
 
 ```console
@@ -357,102 +348,4 @@ optional arguments:
 Full manual at: https://github.com/psy0rz/zfs_autobackup
 ```
 
-## Monitoring with Zabbix-jobs
-
-You can monitor backups by using my zabbix-jobs script. ()
-
-Put this command directly after the zfs_backup command in your cronjob:
-
-```console
-zabbix-job-status backup_smartos01_fs1 daily $?
-```
-
-This will update the zabbix server with the exit code and will also alert you if the job didn't run for more than 2 days.
-
-## Backup a proxmox cluster with HA replication
-
-Due to the nature of proxmox we had to make a few enhancements to zfs-autobackup. This will probably also benefit other systems that use their own replication in combination with zfs-autobackup.
-
-All data under rpool/data can be on multiple nodes of the cluster. The naming of those filesystem is unique over the whole cluster. Because of this we should backup rpool/data of all nodes to the same destination. This way we wont have duplicate backups of the filesystems that are replicated. Because of various options, you can even migrate hosts and zfs-autobackup will be fine. (and it will get the next backup from the new node automatically)
-
-In the example below we have 3 nodes, named pve1, pve2 and pve3.
-
-### Preparing the proxmox nodes
-
-No preparation is needed, the script will take care of everything. You only need to setup the ssh keys, so that the backup server can access the proxmox server.
-
-TIP: make sure your backup server is firewalled and cannot be reached from any production machine.
-
-### SSH config on backup server
-
-I use ~/.ssh/config to specify how to reach the various hosts.
-
-In this example we are making an offsite copy and use portforwarding to reach the proxmox machines:
-```
-Host *
-    ControlPath ~/.ssh/control-master-%r@%h:%p
-    ControlMaster auto
-    ControlPersist 3600
-    Compression yes
-
-Host pve1
-    Hostname some.host.com
-    Port 10001
-
-Host pve2
-    Hostname some.host.com
-    Port 10002
-
-Host pve3
-    Hostname some.host.com
-    Port 10003
-```
-
-### Backup script
-
-I use the following backup script on the backup server.
-
-Adjust the variables HOSTS TARGET and NAME to your needs.
-
-```shell
-#!/bin/bash
-
-HOSTS="pve1 pve2 pve3"
-TARGET=rpool/pvebackups
-NAME=prox
-
-zfs create -p $TARGET/data &>/dev/null
-for HOST in $HOSTS; do
-
-    echo "################################### RPOOL $HOST"
-
-    # enable backup
-    ssh $HOST "zfs set autobackup:rpool_$NAME=child rpool/ROOT"
-
-    #backup rpool to specific directory per host
-    zfs create -p $TARGET/rpools/$HOST &>/dev/null
-    zfs-autobackup --keep-source=1d1w,1w1m --ssh-source $HOST rpool_$NAME $TARGET/rpools/$HOST --clear-mountpoint --clear-refreservation --ignore-transfer-errors --strip-path 2 --verbose --no-holds $@
-
-    zabbix-job-status backup_$HOST""_rpool_$NAME daily $? >/dev/null 2>/dev/null
-
-
-    echo "################################### DATA $HOST"
-
-    # enable backup
-    ssh $HOST "zfs set autobackup:data_$NAME=child rpool/data"
-
-    #backup data filesystems to a common directory
-    zfs-autobackup --keep-source=1d1w,1w1m --ssh-source $HOST data_$NAME $TARGET/data --clear-mountpoint --clear-refreservation --ignore-transfer-errors --strip-path 2 --verbose --ignore-replicated --min-change 300000 --no-holds $@
-
-    zabbix-job-status backup_$HOST""_data_$NAME daily $? >/dev/null 2>/dev/null
-
-done
-```
-
-This script will also send the backup status to Zabbix. (if you've installed my zabbix-job-status script https://github.com/psy0rz/stuff/tree/master/zabbix-jobs)
-
-# Sponsor list
-
-This project was sponsorred by:
-
-* JetBrains (Provided me with a license for their whole professional product line, https://www.jetbrains.com/pycharm/ )
diff --git a/Monitoring.md b/Monitoring.md
index c9310c3..9a2b64a 100644
--- a/Monitoring.md
+++ b/Monitoring.md
@@ -10,8 +10,9 @@ On completion, zfs-autobackup returns an exit code:
 
 Without `--verbose` or `--debug`, zfs-autobackup only echos (zfs) errors and warnings to stderr. Complete silence means everything is fine.
 
-If it detects a tty it will output progress updates to stdout.
+So if you run it from a crontab, it should only mail you when something is wrong.
+If it detects a tty it will output progress updates to stdout.
 
 ## Monitoring example with Zabbix-jobs
diff --git a/_Footer.md b/_Footer.md
new file mode 100644
index 0000000..ee9be23
--- /dev/null
+++ b/_Footer.md
@@ -0,0 +1 @@
+Sponsored by: JetBrains (provided me with a license for their whole professional product line, https://www.jetbrains.com/pycharm/ )
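A note on the backup script in Example Proxmox.md: it relies on `--strip-path 2` so that only the part of the dataset name below rpool/data decides where it lands under the target. The sketch below mimics that mapping with a hypothetical `strip_path` helper (it is not part of zfs-autobackup; it only illustrates the name mapping) to show why a replicated guest disk backed up from any node always maps to the same target dataset:

```shell
#!/bin/sh
# Hypothetical helper that mimics what --strip-path 2 does to a source
# dataset name: drop the first 2 path components before appending the
# remainder to the target dataset.
strip_path() {
    # $1 = source dataset, $2 = number of leading components to strip
    echo "$1" | cut -d/ -f"$(($2 + 1))"-
}

TARGET=rpool/pvebackups/data
# vm-100-disk-0 has the same name on every cluster node, so no matter
# which node it is currently on, it maps to the same target dataset:
echo "$TARGET/$(strip_path rpool/data/vm-100-disk-0 2)"
# -> rpool/pvebackups/data/vm-100-disk-0
```

This is also why migrating a guest to another node is harmless: the next run simply continues the same target dataset, with the new node as the source.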