Compare commits


44 Commits

SHA1 Message Date
c5363a1538 pre/post cmd tests 2021-06-22 12:43:52 +02:00
119225ba5b bump 2021-06-18 17:27:41 +02:00
84437ee1d0 pre/post snapshot polishing. still needs test-scripts 2021-06-18 17:16:18 +02:00
1286bfafd0 Merge pull request #80 from tuffnatty/pre-post-snapshot-cmd
Add support for pre- and post-snapshot scripts (#39)
2021-06-17 11:48:09 +02:00
9fc2703638 Always call post-snapshot-cmd to clean up faster on snapshot failure 2021-06-17 12:38:16 +03:00
01dc65af96 Merge pull request #81 from tuffnatty/another-typo
Another typo fix
2021-06-17 11:21:08 +02:00
082153e0ce A more robust MySQL example 2021-06-17 09:52:18 +03:00
77f5474447 Fix tests. 2021-06-16 23:37:38 +03:00
55ff14f1d8 Allow multiple --pre/post-snapshot-cmd options. Add a usage example. 2021-06-16 23:19:29 +03:00
2acd26b304 Fix a typo 2021-06-16 19:52:06 +03:00
ec9459c1d2 Use shlex.split() for --pre-snapshot-cmd and --post-snapshot-cmd 2021-06-16 19:35:41 +03:00
233fd83ded Merge pull request #79 from tuffnatty/typos
Fix a couple of typos
2021-06-16 16:46:10 +02:00
37c24e092c Merge pull request #78 from tuffnatty/patch-1
Typo fix in argument name in warning message
2021-06-16 16:45:42 +02:00
b2bf11382c Add --pre-snapshot-cmd and --post-snapshot-cmd options 2021-06-16 16:12:20 +03:00
19b918044e Fix a couple of typos 2021-06-16 13:19:47 +03:00
67d9240e7b Typo fix in argument name in warning message 2021-06-16 00:32:31 +03:00
1a5e4a9cdd doc 2021-05-31 22:33:44 +02:00
31f8c359ff update version number 2021-05-31 22:19:28 +02:00
b50b7b7563 test 2021-05-31 22:10:56 +02:00
37f91e1e08 no longer use zfs send --compressed as default. uses --zfs-compressed to reenable it. fixes #77 . 2021-05-31 22:02:31 +02:00
a2f3aee5b1 test and fix resume edge-case 2021-05-26 23:06:19 +02:00
75d0a3cc7e rc1 2021-05-26 20:15:05 +02:00
98c55e2aa8 test fix 2021-05-26 18:07:17 +02:00
d478e22111 test rate limit 2021-05-26 18:04:05 +02:00
3a4953fbc5 doc 2021-05-26 17:57:38 +02:00
8d4e041a9c add mbuffer 2021-05-26 17:52:10 +02:00
8725d56bc9 also add buffer on receving side 2021-05-26 17:38:05 +02:00
ab0bfdbf4e update docs 2021-05-19 00:52:56 +02:00
ea9012e476 beta6 2021-05-18 23:42:13 +02:00
97e3c110b3 added bandwidth throttling. fixes #51 2021-05-18 19:56:33 +02:00
9264e8de6d more warnings 2021-05-18 19:36:33 +02:00
830ccf1bd4 added warnings in yellow 2021-05-18 19:22:46 +02:00
a389e4c81c fix 2021-05-18 18:18:54 +02:00
36a66fbafc fix 2021-05-18 18:10:34 +02:00
b70c9986c7 regression tests for all compressors 2021-05-18 18:04:47 +02:00
664ea32c96 doc 2021-05-15 16:18:34 +02:00
30f30babea added compression, fixes #40 2021-05-15 16:18:02 +02:00
5e04aabf37 show pipes in verbose 2021-05-15 12:34:21 +02:00
59d53e9664 --recv-pipe and --send-pipe implemented. Added CmdItem to make CmdPipe more consitent 2021-05-11 00:59:26 +02:00
171f0ac5ad as final step we now can do system piping. fixes #50 2021-05-09 14:03:57 +02:00
0ce3bf1297 python 2 compat 2021-05-09 13:04:22 +02:00
c682665888 python 2 compat 2021-05-09 11:09:55 +02:00
086cfe570b run everything in either local shell (shell=true), or remote shell (ssh). this it to allow external shell piping 2021-05-09 10:56:30 +02:00
521d1078bd working on send pipe 2021-05-03 20:25:49 +02:00
18 changed files with 764 additions and 254 deletions


@ -17,7 +17,7 @@ jobs:
- name: Prepare
run: sudo apt update && sudo apt install zfsutils-linux && sudo -H pip3 install coverage unittest2 mock==3.0.5 coveralls
run: sudo apt update && sudo apt install zfsutils-linux lzop pigz zstd gzip xz-utils lz4 mbuffer && sudo -H pip3 install coverage unittest2 mock==3.0.5 coveralls
- name: Regression test
@ -39,7 +39,7 @@ jobs:
- name: Prepare
run: sudo apt update && sudo apt install zfsutils-linux python3-setuptools && sudo -H pip3 install coverage unittest2 mock==3.0.5 coveralls
run: sudo apt update && sudo apt install zfsutils-linux python3-setuptools lzop pigz zstd gzip xz-utils liblz4-tool mbuffer && sudo -H pip3 install coverage unittest2 mock==3.0.5 coveralls
- name: Regression test
@ -64,7 +64,7 @@ jobs:
python-version: '2.x'
- name: Prepare
run: sudo apt update && sudo apt install zfsutils-linux python-setuptools && sudo -H pip install coverage unittest2 mock==3.0.5 coveralls colorama
run: sudo apt update && sudo apt install zfsutils-linux python-setuptools lzop pigz zstd gzip xz-utils liblz4-tool mbuffer && sudo -H pip install coverage unittest2 mock==3.0.5 coveralls colorama
- name: Regression test
run: sudo -E ./tests/run_tests

README.md

@ -5,7 +5,7 @@
## Introduction
This is a tool I wrote to make replicating ZFS datasets easy and reliable.
ZFS-autobackup tries to be the most reliable and easiest to use tool, while having all the features.
You can either use it as a **backup** tool, **replication** tool or **snapshot** tool.
@ -13,12 +13,10 @@ You can select what to backup by setting a custom `ZFS property`. This makes it
Other settings are just specified on the commandline: Simply set up and test your zfs-autobackup command and fix all the issues you might encounter. When you're done you can just copy/paste your command to a cron or script.
Since its using ZFS commands, you can see what its actually doing by specifying `--debug`. This also helps a lot if you run into some strange problem or error. You can just copy-paste the command that fails and play around with it on the commandline. (something I missed in other tools)
Since its using ZFS commands, you can see what it's actually doing by specifying `--debug`. This also helps a lot if you run into some strange problem or error. You can just copy-paste the command that fails and play around with it on the commandline. (something I missed in other tools)
An important feature that's missing from other tools is a reliable `--test` option: This allows you to see what zfs-autobackup will do and tune your parameters. It will do everything, except make changes to your system.
zfs-autobackup tries to be the easiest to use backup tool for zfs, with the most features.
## Features
* Works across operating systems: Tested with **Linux**, **FreeBSD/FreeNAS** and **SmartOS**.
@ -34,6 +32,9 @@ zfs-autobackup tries to be the easiest to use backup tool for zfs, with the most
* Can be scheduled via a simple cronjob or run directly from commandline.
* Supports resuming of interrupted transfers.
* ZFS encryption support: Can decrypt / encrypt or even re-encrypt datasets during transfer.
* Supports sending with compression. (Using pigz, zstd etc)
* IO buffering to speed up transfer.
* Bandwidth rate limiting.
* Multiple backups from and to the same datasets are no problem.
* Creates the snapshot before doing anything else. (assuring you at least have a snapshot if all else fails)
* Checks everything but tries to continue on non-fatal errors when possible. (Reports error-count when done)
@ -44,10 +45,11 @@ zfs-autobackup tries to be the easiest to use backup tool for zfs, with the most
* Automatic resuming of failed transfers.
* Can continue from existing common snapshots. (e.g. easy migration)
* Gracefully handles datasets that no longer exist on source.
* Support for ZFS sending/receiving through custom pipes.
* Easy installation:
* Just install zfs-autobackup via pip, or download it manually.
* Just install zfs-autobackup via pip.
* Only needs to be installed on one side.
* Written in python and uses zfs-commands, no 3rd party dependency's or libraries needed.
* Written in python and uses zfs-commands, no special 3rd party dependencies or compiled libraries needed.
* No separate config files or properties. Just one zfs-autobackup command you can copy/paste in your backup script.
## Installation
@ -64,6 +66,8 @@ The recommended way on most servers is to use [pip](https://pypi.org/project/zfs
This can also be used to upgrade zfs-autobackup to the newest stable version.
To install the latest beta version add the `--pre` option.
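For example, upgrading straight to the latest beta (a minimal sketch of the pip invocation):
```console
pip install --upgrade --pre zfs-autobackup
```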
### Using easy_install
On older servers you might have to use easy_install
@ -394,6 +398,47 @@ Note 2: Decide what you want at an early stage: If you change the --encrypt or -
I'll add some tips when the issues start to come in on github. :)
## Transfer buffering, compression and rate limiting.
If you're transferring over a slow link it might be useful to use `--compress=zstd-fast`. This will compress the data before sending, so it uses less bandwidth. An alternative is `--zfs-compressed`: this transfers blocks that already have ZFS compression as-is. (`--compress` will usually compress much better but uses much more resources; `--zfs-compressed` uses the least resources, but can be a disadvantage if you want to use a different compression method on the target.)
You can also limit the datarate by using the `--rate` option.
The `--buffer` option might also help, since it acts as an IO buffer: the output of zfs send can vary wildly between completely idle and huge bursts of data. While zfs send is idle, the buffer keeps transferring data over the slow link.
It's also possible to add custom send or receive pipes with `--send-pipe` and `--recv-pipe`.
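For example, this sketch routes the stream through `dd` on both sides, the same way the regression tests do (the backup name and target path are hypothetical):
```sh
zfs-autobackup --send-pipe 'dd bs=1M' --recv-pipe 'dd bs=2M' offsite1 backuppool/backups
```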
These options all work together and the buffer on the receiving side is only added if appropriate. When all options are active:
#### On the sending side:
zfs send -> send buffer -> custom send pipes -> compression -> transfer rate limiter
#### On the receiving side:
decompression -> custom recv pipes -> buffer -> zfs recv
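For example, a hypothetical invocation that combines these options (the host, backup name and target path are made up):
```sh
zfs-autobackup --ssh-target backupserver \
    --compress=zstd-fast --rate=1M --buffer=16M \
    offsite1 backuppool/backups
```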
## Running custom commands before and after snapshotting
You can run commands before and after the snapshot, for example to freeze databases so that the on-disk data is consistent before snapshotting.
The commands will be executed on the source side. Use the `--pre-snapshot-cmd` and `--post-snapshot-cmd` options for this.
For example:
```sh
zfs-autobackup \
--pre-snapshot-cmd 'daemon -f jexec mysqljail1 mysql -s -e "set autocommit=0;flush logs;flush tables with read lock;\\! echo \$\$ > /tmp/mysql_lock.pid && sleep 60"' \
--pre-snapshot-cmd 'daemon -f jexec mysqljail2 mysql -s -e "set autocommit=0;flush logs;flush tables with read lock;\\! echo \$\$ > /tmp/mysql_lock.pid && sleep 60"' \
--post-snapshot-cmd 'pkill -F /jails/mysqljail1/tmp/mysql_lock.pid' \
--post-snapshot-cmd 'pkill -F /jails/mysqljail2/tmp/mysql_lock.pid' \
backupfs1
```
Failure handling during pre/post commands:
* If a pre-command fails, zfs-autobackup will exit with an error. (after executing the post-commands)
* All post-commands are always executed, even if the pre-commands or the actual snapshot failed. This way you can be sure that everything is always cleaned up and unfrozen.
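Because post-commands always run, a simple freeze/thaw pair stays safe even when the snapshot itself fails; a minimal sketch (the service name is hypothetical):
```sh
zfs-autobackup \
    --pre-snapshot-cmd 'systemctl stop myapp' \
    --post-snapshot-cmd 'systemctl start myapp' \
    backupfs1
```
Even if the snapshot fails, the post-command still runs and `myapp` is started again.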
## Tips
* Use ```--debug``` if something goes wrong and you want to see the commands that are executed. This will also stop at the first error.
@ -406,6 +451,8 @@ I'll add some tips when the issues start to get in on github. :)
If you have a large number of datasets it's important to keep the following tips in mind.
It might also help to use the `--buffer` option to add IO buffering during the data transfer. This can speed things up, since it smooths out the sudden IO bursts that are frequent during a zfs send or recv.
#### Some statistics
To get some idea of how fast zfs-autobackup is, I did some tests on my laptop, with a SKHynix_HFS512GD9TNI-L2B0B disk. I'm using zfs 2.0.2.
@ -466,17 +513,32 @@ Look in man ssh_config for many more options.
(NOTE: Quite a lot has changed since the current stable version 3.0. The page you are viewing is for the upcoming version 3.1, which is still in beta.)
```console
usage: zfs-autobackup [-h] [--ssh-config CONFIG-FILE] [--ssh-source USER@HOST] [--ssh-target USER@HOST] [--keep-source SCHEDULE] [--keep-target SCHEDULE] [--other-snapshots] [--no-snapshot] [--no-send]
[--no-thinning] [--no-holds] [--min-change BYTES] [--allow-empty] [--ignore-replicated] [--strip-path N] [--clear-refreservation] [--clear-mountpoint] [--filter-properties PROPERY,...]
[--set-properties PROPERTY=VALUE,...] [--rollback] [--destroy-incompatible] [--destroy-missing SCHEDULE] [--ignore-transfer-errors] [--decrypt] [--encrypt] [--test] [--verbose] [--debug]
[--debug-output] [--progress] [--send-pipe COMMAND] [--recv-pipe COMMAND]
backup-name [target-path]
usage: zfs-autobackup [-h] [--ssh-config CONFIG-FILE] [--ssh-source USER@HOST]
[--ssh-target USER@HOST] [--keep-source SCHEDULE]
[--keep-target SCHEDULE] [--pre-snapshot-cmd COMMAND]
[--post-snapshot-cmd COMMAND] [--other-snapshots]
[--no-snapshot] [--no-send] [--no-thinning] [--no-holds]
[--min-change BYTES] [--allow-empty]
[--ignore-replicated] [--strip-path N]
[--clear-refreservation] [--clear-mountpoint]
[--filter-properties PROPERTY,...]
[--set-properties PROPERTY=VALUE,...] [--rollback]
[--destroy-incompatible] [--destroy-missing SCHEDULE]
[--ignore-transfer-errors] [--decrypt] [--encrypt]
[--test] [--verbose] [--debug] [--debug-output]
[--progress] [--send-pipe COMMAND] [--recv-pipe COMMAND]
[--compress TYPE] [--rate DATARATE] [--buffer SIZE]
backup-name [target-path]
zfs-autobackup v3.1-beta3 - Copyright 2020 E.H.Eefting (edwin@datux.nl)
zfs-autobackup v3.1-beta6 - (c)2021 E.H.Eefting (edwin@datux.nl)
positional arguments:
backup-name Name of the backup (you should set the zfs property "autobackup:backup-name" to true on filesystems you want to backup
target-path Target ZFS filesystem (optional: if not specified, zfs-autobackup will only operate as snapshot-tool on source)
backup-name Name of the backup (you should set the zfs property
"autobackup:backup-name" to true on filesystems you
want to backup)
target-path Target ZFS filesystem (optional: if not specified,
zfs-autobackup will only operate as snapshot-tool on
source)
optional arguments:
-h, --help show this help message and exit
@ -487,41 +549,84 @@ optional arguments:
--ssh-target USER@HOST
Target host to push backup to.
--keep-source SCHEDULE
Thinning schedule for old source snapshots. Default: 10,1d1w,1w1m,1m1y
Thinning schedule for old source snapshots. Default:
10,1d1w,1w1m,1m1y
--keep-target SCHEDULE
Thinning schedule for old target snapshots. Default: 10,1d1w,1w1m,1m1y
--other-snapshots Send over other snapshots as well, not just the ones created by this tool.
--no-snapshot Don't create new snapshots (useful for finishing uncompleted backups, or cleanups)
--no-send Don't send snapshots (useful for cleanups, or if you want a serperate send-cronjob)
Thinning schedule for old target snapshots. Default:
10,1d1w,1w1m,1m1y
--pre-snapshot-cmd COMMAND
Run COMMAND before snapshotting (can be used multiple
times.)
--post-snapshot-cmd COMMAND
Run COMMAND after snapshotting (can be used multiple
times.)
--other-snapshots Send over other snapshots as well, not just the ones
created by this tool.
--no-snapshot Don't create new snapshots (useful for finishing
uncompleted backups, or cleanups)
--no-send Don't send snapshots (useful for cleanups, or if you
want a separate send-cronjob)
--no-thinning Do not destroy any snapshots.
--no-holds Don't hold snapshots. (Faster. Allows you to destroy common snapshot.)
--min-change BYTES Number of bytes written after which we consider a dataset changed (default 1)
--allow-empty If nothing has changed, still create empty snapshots. (same as --min-change=0)
--ignore-replicated Ignore datasets that seem to be replicated some other way. (No changes since lastest snapshot. Useful for proxmox HA replication)
--strip-path N Number of directories to strip from target path (use 1 when cloning zones between 2 SmartOS machines)
--no-holds Don't hold snapshots. (Faster. Allows you to destroy
common snapshot.)
--min-change BYTES Number of bytes written after which we consider a
dataset changed (default 1)
--allow-empty If nothing has changed, still create empty snapshots.
(same as --min-change=0)
--ignore-replicated Ignore datasets that seem to be replicated some other
way. (No changes since latest snapshot. Useful for
proxmox HA replication)
--strip-path N Number of directories to strip from target path (use 1
when cloning zones between 2 SmartOS machines)
--clear-refreservation
Filter "refreservation" property. (recommended, safes space. same as --filter-properties refreservation)
--clear-mountpoint Set property canmount=noauto for new datasets. (recommended, prevents mount conflicts. same as --set-properties canmount=noauto)
Filter "refreservation" property. (recommended, safes
space. same as --filter-properties refreservation)
--clear-mountpoint Set property canmount=noauto for new datasets.
(recommended, prevents mount conflicts. same as --set-
properties canmount=noauto)
--filter-properties PROPERTY,...
List of properties to "filter" when receiving filesystems. (you can still restore them with zfs inherit -S)
List of properties to "filter" when receiving
filesystems. (you can still restore them with zfs
inherit -S)
--set-properties PROPERTY=VALUE,...
List of propererties to override when receiving filesystems. (you can still restore them with zfs inherit -S)
--rollback Rollback changes to the latest target snapshot before starting. (normally you can prevent changes by setting the readonly property on the target_path to on)
List of properties to override when receiving
filesystems. (you can still restore them with zfs
inherit -S)
--rollback Rollback changes to the latest target snapshot before
starting. (normally you can prevent changes by setting
the readonly property on the target_path to on)
--destroy-incompatible
Destroy incompatible snapshots on target. Use with care! (implies --rollback)
Destroy incompatible snapshots on target. Use with
care! (implies --rollback)
--destroy-missing SCHEDULE
Destroy datasets on target that are missing on the source. Specify the time since the last snapshot, e.g: --destroy-missing 30d
Destroy datasets on target that are missing on the
source. Specify the time since the last snapshot, e.g:
--destroy-missing 30d
--ignore-transfer-errors
Ignore transfer errors (still checks if received filesystem exists. useful for acltype errors)
Ignore transfer errors (still checks if received
filesystem exists. useful for acltype errors)
--decrypt Decrypt data before sending it over.
--encrypt Encrypt data after receiving it.
--test dont change anything, just show what would be done (still does all read-only operations)
--test don't change anything, just show what would be done
(still does all read-only operations)
--verbose verbose output
--debug Show zfs commands that are executed, stops after an exception.
--debug Show zfs commands that are executed, stops after an
exception.
--debug-output Show zfs commands and their output/exit codes. (noisy)
--progress show zfs progress output. Enabled automaticly on ttys. (use --no-progress to disable)
--send-pipe COMMAND pipe zfs send output through COMMAND
--recv-pipe COMMAND pipe zfs recv input through COMMAND
--progress show zfs progress output. Enabled automatically on ttys.
(use --no-progress to disable)
--send-pipe COMMAND pipe zfs send output through COMMAND (can be used
multiple times)
--recv-pipe COMMAND pipe zfs recv input through COMMAND (can be used
multiple times)
--compress TYPE Use compression during transfer, zstd-fast
recommended. (zstd-slow, xz, pigz-fast, lz4, pigz-
slow, zstd-fast, gzip, lzo)
--rate DATARATE Limit data transfer rate (e.g. 128K. requires
mbuffer.)
--buffer SIZE Add zfs send and recv buffers to smooth out IO bursts.
(e.g. 128M. requires mbuffer)
Full manual at: https://github.com/psy0rz/zfs_autobackup
```


@ -1,4 +1,6 @@
# To run tests as non-root, use this hack:
# chmod 4755 /usr/sbin/zpool /usr/sbin/zfs
import subprocess
import random


@ -1,5 +1,5 @@
from basetest import *
from zfs_autobackup.CmdPipe import CmdPipe
from zfs_autobackup.CmdPipe import CmdPipe,CmdItem
class TestCmdPipe(unittest2.TestCase):
@ -9,24 +9,24 @@ class TestCmdPipe(unittest2.TestCase):
p=CmdPipe(readonly=False, inp=None)
err=[]
out=[]
p.add(["ls", "-d", "/", "/", "/nonexistent"], stderr_handler=lambda line: err.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,2))
p.add(CmdItem(["ls", "-d", "/", "/", "/nonexistent"], stderr_handler=lambda line: err.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,2)))
executed=p.execute(stdout_handler=lambda line: out.append(line))
self.assertEqual(err, ["ls: cannot access '/nonexistent': No such file or directory"])
self.assertEqual(out, ["/","/"])
self.assertTrue(executed)
self.assertIsNone(executed)
def test_input(self):
"""test stdinput"""
p=CmdPipe(readonly=False, inp="test")
err=[]
out=[]
p.add(["echo", "test"], stderr_handler=lambda line: err.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,0))
p.add(CmdItem(["cat"], stderr_handler=lambda line: err.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,0)))
executed=p.execute(stdout_handler=lambda line: out.append(line))
self.assertEqual(err, [])
self.assertEqual(out, ["test"])
self.assertTrue(executed)
self.assertIsNone(executed)
def test_pipe(self):
"""test piped"""
@ -35,16 +35,16 @@ class TestCmdPipe(unittest2.TestCase):
err2=[]
err3=[]
out=[]
p.add(["echo", "test"], stderr_handler=lambda line: err1.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,0))
p.add(["tr", "e", "E"], stderr_handler=lambda line: err2.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,0))
p.add(["tr", "t", "T"], stderr_handler=lambda line: err3.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,0))
p.add(CmdItem(["echo", "test"], stderr_handler=lambda line: err1.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,0)))
p.add(CmdItem(["tr", "e", "E"], stderr_handler=lambda line: err2.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,0)))
p.add(CmdItem(["tr", "t", "T"], stderr_handler=lambda line: err3.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,0)))
executed=p.execute(stdout_handler=lambda line: out.append(line))
self.assertEqual(err1, [])
self.assertEqual(err2, [])
self.assertEqual(err3, [])
self.assertEqual(out, ["TEsT"])
self.assertTrue(executed)
self.assertIsNone(executed)
#test str representation as well
self.assertEqual(str(p), "(echo test) | (tr e E) | (tr t T)")
@ -56,16 +56,16 @@ class TestCmdPipe(unittest2.TestCase):
err2=[]
err3=[]
out=[]
p.add(["ls", "/nonexistent1"], stderr_handler=lambda line: err1.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,2))
p.add(["ls", "/nonexistent2"], stderr_handler=lambda line: err2.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,2))
p.add(["ls", "/nonexistent3"], stderr_handler=lambda line: err3.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,2))
p.add(CmdItem(["ls", "/nonexistent1"], stderr_handler=lambda line: err1.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,2)))
p.add(CmdItem(["ls", "/nonexistent2"], stderr_handler=lambda line: err2.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,2)))
p.add(CmdItem(["ls", "/nonexistent3"], stderr_handler=lambda line: err3.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,2)))
executed=p.execute(stdout_handler=lambda line: out.append(line))
self.assertEqual(err1, ["ls: cannot access '/nonexistent1': No such file or directory"])
self.assertEqual(err2, ["ls: cannot access '/nonexistent2': No such file or directory"])
self.assertEqual(err3, ["ls: cannot access '/nonexistent3': No such file or directory"])
self.assertEqual(out, [])
self.assertTrue(executed)
self.assertIsNone(executed)
def test_exitcode(self):
"""test piped exitcodes """
@ -74,16 +74,16 @@ class TestCmdPipe(unittest2.TestCase):
err2=[]
err3=[]
out=[]
p.add(["bash", "-c", "exit 1"], stderr_handler=lambda line: err1.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,1))
p.add(["bash", "-c", "exit 2"], stderr_handler=lambda line: err2.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,2))
p.add(["bash", "-c", "exit 3"], stderr_handler=lambda line: err3.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,3))
p.add(CmdItem(["bash", "-c", "exit 1"], stderr_handler=lambda line: err1.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,1)))
p.add(CmdItem(["bash", "-c", "exit 2"], stderr_handler=lambda line: err2.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,2)))
p.add(CmdItem(["bash", "-c", "exit 3"], stderr_handler=lambda line: err3.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,3)))
executed=p.execute(stdout_handler=lambda line: out.append(line))
self.assertEqual(err1, [])
self.assertEqual(err2, [])
self.assertEqual(err3, [])
self.assertEqual(out, [])
self.assertTrue(executed)
self.assertIsNone(executed)
def test_readonly_execute(self):
"""everything readonly, just should execute"""
@ -92,16 +92,18 @@ class TestCmdPipe(unittest2.TestCase):
err1=[]
err2=[]
out=[]
p.add(["echo", "test1"], stderr_handler=lambda line: err1.append(line), readonly=True)
p.add(["echo", "test2"], stderr_handler=lambda line: err2.append(line), readonly=True)
def true_exit(exit_code):
return True
p.add(CmdItem(["echo", "test1"], stderr_handler=lambda line: err1.append(line), exit_handler=true_exit, readonly=True))
p.add(CmdItem(["echo", "test2"], stderr_handler=lambda line: err2.append(line), exit_handler=true_exit, readonly=True))
executed=p.execute(stdout_handler=lambda line: out.append(line))
self.assertEqual(err1, [])
self.assertEqual(err2, [])
self.assertEqual(out, ["test2"])
self.assertTrue(executed)
self.assertEqual(p.items[0]['process'].returncode,0)
self.assertEqual(p.items[1]['process'].returncode,0)
def test_readonly_skip(self):
"""one command not readonly, skip"""
@ -110,12 +112,12 @@ class TestCmdPipe(unittest2.TestCase):
err1=[]
err2=[]
out=[]
p.add(["echo", "test1"], stderr_handler=lambda line: err1.append(line), readonly=False)
p.add(["echo", "test2"], stderr_handler=lambda line: err2.append(line), readonly=True)
p.add(CmdItem(["echo", "test1"], stderr_handler=lambda line: err1.append(line), readonly=False))
p.add(CmdItem(["echo", "test2"], stderr_handler=lambda line: err2.append(line), readonly=True))
executed=p.execute(stdout_handler=lambda line: out.append(line))
self.assertEqual(err1, [])
self.assertEqual(err2, [])
self.assertEqual(out, [])
self.assertFalse(executed)
self.assertTrue(executed)


@ -26,9 +26,9 @@ class TestExecuteNode(unittest2.TestCase):
with self.subTest("multiline tabsplit"):
self.assertEqual(node.run(["echo","l1c1\tl1c2\nl2c1\tl2c2"], tab_split=True), [['l1c1', 'l1c2'], ['l2c1', 'l2c2']])
#escaping test (shouldnt be a problem locally, single quotes can be a problem remote via ssh)
#escaping test
with self.subTest("escape test"):
s="><`'\"@&$()$bla\\/.*!#test _+-={}[]|"
s="><`'\"@&$()$bla\\/.* !#test _+-={}[]|${bla} $bla"
self.assertEqual(node.run(["echo",s]), [s])
#return std err as well, trigger stderr by listing something non existing
@ -51,6 +51,15 @@ class TestExecuteNode(unittest2.TestCase):
with self.subTest("stdin process with inp=None (shouldn't hang)"):
self.assertEqual(node.run(["cat"]), [])
# let the system do the piping with an unescaped |:
with self.subTest("system piping test"):
#first make sure the actual | character is still properly escaped:
self.assertEqual(node.run(["echo","|"]), ["|"])
#now pipe
self.assertEqual(node.run(["echo", "abc", node.PIPE, "tr", "a", "A" ]), ["Abc"])
def test_basics_local(self):
node=ExecuteNode(debug_output=True)
self.basics(node)


@ -227,11 +227,11 @@ test_target1/test_source2/fs2/sub@test-20101111000000
# incremental, doesnt want previous anymore
with patch('time.strftime', return_value="20101111000002"):
self.assertFalse(ZfsAutobackup(
"test test_target1 --no-progress --verbose --keep-target=0 --debug --allow-empty".split(" ")).run())
"test test_target1 --no-progress --verbose --keep-target=0 --allow-empty".split(" ")).run())
print(buf.getvalue())
self.assertIn(": aborting resume, since", buf.getvalue())
self.assertIn("Aborting resume, we dont want that snapshot anymore.", buf.getvalue())
r = shelltest("zfs list -H -o name -r -t all test_target1")
self.assertMultiLineEqual(r, """
@ -247,6 +247,34 @@ test_target1/test_source2/fs2/sub
test_target1/test_source2/fs2/sub@test-20101111000002
""")
# test with empty snapshot list (this was a bug)
def test_abort_resume_emptysnapshotlist(self):
if "0.6.5" in ZFS_USERSPACE:
self.skipTest("Resume not supported in this ZFS userspace version")
with patch('time.strftime', return_value="20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())
# generate resume
with patch('time.strftime', return_value="20101111000001"):
self.generate_resume()
shelltest("zfs destroy test_source1/fs1@test-20101111000001")
with OutputIO() as buf:
with redirect_stdout(buf):
# incremental, doesnt want previous anymore
with patch('time.strftime', return_value="20101111000002"):
self.assertFalse(ZfsAutobackup(
"test test_target1 --no-progress --verbose --no-snapshot".split(
" ")).run())
print(buf.getvalue())
self.assertIn("Aborting resume, its obsolete", buf.getvalue())
def test_missing_common(self):
with patch('time.strftime', return_value="20101111000000"):


@ -0,0 +1,88 @@
import zfs_autobackup.compressors
from basetest import *
import time
class TestSendRecvPipes(unittest2.TestCase):
"""test input/output pipes for zfs send and recv"""
def setUp(self):
prepare_zpools()
self.longMessage=True
def test_send_basics(self):
"""send basics (remote/local send pipe)"""
with self.subTest("local local pipe"):
with patch('time.strftime', return_value="20101111000000"):
self.assertFalse(ZfsAutobackup(["test", "test_target1", "--exclude-received", "--no-holds", "--no-progress", "--send-pipe=dd bs=1M", "--recv-pipe=dd bs=2M"]).run())
shelltest("zfs destroy -r test_target1/test_source1/fs1/sub")
with self.subTest("remote local pipe"):
with patch('time.strftime', return_value="20101111000000"):
self.assertFalse(ZfsAutobackup(["test", "test_target1", "--exclude-received", "--no-holds", "--no-progress", "--ssh-source=localhost", "--send-pipe=dd bs=1M", "--recv-pipe=dd bs=2M"]).run())
shelltest("zfs destroy -r test_target1/test_source1/fs1/sub")
with self.subTest("local remote pipe"):
with patch('time.strftime', return_value="20101111000000"):
self.assertFalse(ZfsAutobackup(["test", "test_target1", "--exclude-received", "--no-holds", "--no-progress", "--ssh-target=localhost", "--send-pipe=dd bs=1M", "--recv-pipe=dd bs=2M"]).run())
shelltest("zfs destroy -r test_target1/test_source1/fs1/sub")
with self.subTest("remote remote pipe"):
with patch('time.strftime', return_value="20101111000000"):
self.assertFalse(ZfsAutobackup(["test", "test_target1", "--exclude-received", "--no-holds", "--no-progress", "--ssh-source=localhost", "--ssh-target=localhost", "--send-pipe=dd bs=1M", "--recv-pipe=dd bs=2M"]).run())
def test_compress(self):
"""send basics (remote/local send pipe)"""
for compress in zfs_autobackup.compressors.COMPRESS_CMDS.keys():
with self.subTest("compress "+compress):
with patch('time.strftime', return_value="20101111000000"):
self.assertFalse(ZfsAutobackup(["test", "test_target1", "--exclude-received", "--no-holds", "--no-progress", "--compress="+compress]).run())
shelltest("zfs destroy -r test_target1/test_source1/fs1/sub")
def test_buffer(self):
"""test different buffer configurations"""
with self.subTest("local local pipe"):
with patch('time.strftime', return_value="20101111000000"):
self.assertFalse(ZfsAutobackup(["test", "test_target1", "--exclude-received", "--no-holds", "--no-progress", "--buffer=1M" ]).run())
shelltest("zfs destroy -r test_target1/test_source1/fs1/sub")
with self.subTest("remote local pipe"):
with patch('time.strftime', return_value="20101111000000"):
self.assertFalse(ZfsAutobackup(["test", "test_target1", "--exclude-received", "--no-holds", "--no-progress", "--ssh-source=localhost", "--buffer=1M"]).run())
shelltest("zfs destroy -r test_target1/test_source1/fs1/sub")
with self.subTest("local remote pipe"):
with patch('time.strftime', return_value="20101111000000"):
self.assertFalse(ZfsAutobackup(["test", "test_target1", "--exclude-received", "--no-holds", "--no-progress", "--ssh-target=localhost", "--buffer=1M"]).run())
shelltest("zfs destroy -r test_target1/test_source1/fs1/sub")
with self.subTest("remote remote pipe"):
with patch('time.strftime', return_value="20101111000000"):
self.assertFalse(ZfsAutobackup(["test", "test_target1", "--exclude-received", "--no-holds", "--no-progress", "--ssh-source=localhost", "--ssh-target=localhost", "--buffer=1M"]).run())
def test_rate(self):
"""test rate limit"""
start=time.time()
with patch('time.strftime', return_value="20101111000000"):
self.assertFalse(ZfsAutobackup(["test", "test_target1", "--exclude-received", "--no-holds", "--no-progress", "--rate=50k" ]).run())
#not a great way of verifying but it works.
self.assertGreater(time.time()-start, 5)


@ -890,7 +890,7 @@ test_target1/test_source2/fs2/sub@test-20101111000003
n=ZfsNode("test",l)
d=ZfsDataset(n,"test_source1@test")
sp=d.send_pipe([], prev_snapshot=None, resume_token=None, show_progress=True, raw=False, output_pipes=[], send_properties=True, write_embedded=True)
sp=d.send_pipe([], prev_snapshot=None, resume_token=None, show_progress=True, raw=False, send_pipes=[], send_properties=True, write_embedded=True, zfs_compressed=True)
with OutputIO() as buf:


@ -2,6 +2,7 @@ from basetest import *
import time
class TestZfsAutobackup31(unittest2.TestCase):
"""various new 3.1 features"""
def setUp(self):
prepare_zpools()
@ -70,4 +71,11 @@ test_target1/b/test_source1/fs1/sub@test-20101111000000
test_target1/b/test_source2/fs2/sub@test-20101111000000
test_target1/b/test_target1/a/test_source1/fs1@test-20101111000000
test_target1/b/test_target1/a/test_source1/fs1/sub@test-20101111000000
""")
""")
def test_zfs_compressed(self):
with patch('time.strftime', return_value="20101111000000"):
self.assertFalse(
ZfsAutobackup("test test_target1 --no-progress --verbose --debug --zfs-compressed".split(" ")).run())


@ -1,6 +1,6 @@
from basetest import *
from zfs_autobackup.LogStub import LogStub
from zfs_autobackup.ExecuteNode import ExecuteError
class TestZfsNode(unittest2.TestCase):
@ -9,16 +9,15 @@ class TestZfsNode(unittest2.TestCase):
prepare_zpools()
# return super().setUp()
def test_consistent_snapshot(self):
logger=LogStub()
description="[Source]"
node=ZfsNode("test", logger, description=description)
logger = LogStub()
description = "[Source]"
node = ZfsNode("test", logger, description=description)
with self.subTest("first snapshot"):
node.consistent_snapshot(node.selected_datasets(exclude_paths=[], exclude_received=False), "test-1",100000)
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
self.assertEqual(r,"""
node.consistent_snapshot(node.selected_datasets(exclude_paths=[], exclude_received=False), "test-1", 100000)
r = shelltest("zfs list -H -o name -r -t all " + TEST_POOLS)
self.assertEqual(r, """
test_source1
test_source1/fs1
test_source1/fs1@test-1
@ -33,11 +32,10 @@ test_source2/fs3/sub
test_target1
""")
with self.subTest("second snapshot, no changes, no snapshot"):
node.consistent_snapshot(node.selected_datasets(exclude_paths=[], exclude_received=False), "test-2",1)
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
self.assertEqual(r,"""
node.consistent_snapshot(node.selected_datasets(exclude_paths=[], exclude_received=False), "test-2", 1)
r = shelltest("zfs list -H -o name -r -t all " + TEST_POOLS)
self.assertEqual(r, """
test_source1
test_source1/fs1
test_source1/fs1@test-1
@ -53,9 +51,9 @@ test_target1
""")
with self.subTest("second snapshot, no changes, empty snapshot"):
node.consistent_snapshot(node.selected_datasets(exclude_paths=[], exclude_received=False), "test-2",0)
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
self.assertEqual(r,"""
node.consistent_snapshot(node.selected_datasets(exclude_paths=[], exclude_received=False), "test-2", 0)
r = shelltest("zfs list -H -o name -r -t all " + TEST_POOLS)
self.assertEqual(r, """
test_source1
test_source1/fs1
test_source1/fs1@test-1
@ -73,31 +71,82 @@ test_source2/fs3/sub
test_target1
""")
def test_consistent_snapshot_prepostcmds(self):
logger = LogStub()
description = "[Source]"
node = ZfsNode("test", logger, description=description, debug_output=True)
with self.subTest("Test if all cmds are executed correctly (no failures)"):
with OutputIO() as buf:
with redirect_stdout(buf):
node.consistent_snapshot(node.selected_datasets(exclude_paths=[], exclude_received=False), "test-1",
0,
pre_snapshot_cmds=["echo pre1", "echo pre2"],
post_snapshot_cmds=["echo post1 >&2", "echo post2 >&2"]
)
self.assertIn("STDOUT > pre1", buf.getvalue())
self.assertIn("STDOUT > pre2", buf.getvalue())
self.assertIn("STDOUT > post1", buf.getvalue())
self.assertIn("STDOUT > post2", buf.getvalue())
with self.subTest("Failure in the middle, only pre1 and both post1 and post2 should be executed, no snapshot should be attempted"):
with OutputIO() as buf:
with redirect_stdout(buf):
with self.assertRaises(ExecuteError):
node.consistent_snapshot(node.selected_datasets(exclude_paths=[], exclude_received=False), "test-1",
0,
pre_snapshot_cmds=["echo pre1", "false", "echo pre2"],
post_snapshot_cmds=["echo post1", "false", "echo post2"]
)
print(buf.getvalue())
self.assertIn("STDOUT > pre1", buf.getvalue())
self.assertNotIn("STDOUT > pre2", buf.getvalue())
self.assertIn("STDOUT > post1", buf.getvalue())
self.assertIn("STDOUT > post2", buf.getvalue())
with self.subTest("Snapshot fails"):
with OutputIO() as buf:
with redirect_stdout(buf):
with self.assertRaises(ExecuteError):
#same snapshot name as before so it fails
node.consistent_snapshot(node.selected_datasets(exclude_paths=[], exclude_received=False), "test-1",
0,
pre_snapshot_cmds=["echo pre1", "echo pre2"],
post_snapshot_cmds=["echo post1", "echo post2"]
)
print(buf.getvalue())
self.assertIn("STDOUT > pre1", buf.getvalue())
self.assertIn("STDOUT > pre2", buf.getvalue())
self.assertIn("STDOUT > post1", buf.getvalue())
self.assertIn("STDOUT > post2", buf.getvalue())
def test_getselected(self):
logger=LogStub()
description="[Source]"
node=ZfsNode("test", logger, description=description)
s=pformat(node.selected_datasets(exclude_paths=[], exclude_received=False))
logger = LogStub()
description = "[Source]"
node = ZfsNode("test", logger, description=description)
s = pformat(node.selected_datasets(exclude_paths=[], exclude_received=False))
print(s)
#basics
self.assertEqual (s, """[(local): test_source1/fs1,
# basics
self.assertEqual(s, """[(local): test_source1/fs1,
(local): test_source1/fs1/sub,
(local): test_source2/fs2/sub]""")
#caching, so expect same result after changing it
# caching, so expect same result after changing it
subprocess.check_call("zfs set autobackup:test=true test_source2/fs3", shell=True)
self.assertEqual (s, """[(local): test_source1/fs1,
self.assertEqual(s, """[(local): test_source1/fs1,
(local): test_source1/fs1/sub,
(local): test_source2/fs2/sub]""")
def test_validcommand(self):
logger=LogStub()
description="[Source]"
node=ZfsNode("test", logger, description=description)
logger = LogStub()
description = "[Source]"
node = ZfsNode("test", logger, description=description)
with self.subTest("test invalid option"):
self.assertFalse(node.valid_command(["zfs", "send", "--invalid-option", "nonexisting"]))
@ -105,21 +154,19 @@ test_target1
self.assertTrue(node.valid_command(["zfs", "send", "-v", "nonexisting"]))
def test_supportedsendoptions(self):
logger=LogStub()
description="[Source]"
node=ZfsNode("test", logger, description=description)
logger = LogStub()
description = "[Source]"
node = ZfsNode("test", logger, description=description)
# -D propably always supported
self.assertGreater(len(node.supported_send_options),0)
self.assertGreater(len(node.supported_send_options), 0)
def test_supportedrecvoptions(self):
logger=LogStub()
description="[Source]"
#NOTE: this could hang via ssh if we dont close filehandles properly. (which was a previous bug)
node=ZfsNode("test", logger, description=description, ssh_to='localhost')
logger = LogStub()
description = "[Source]"
# NOTE: this could hang via ssh if we dont close filehandles properly. (which was a previous bug)
node = ZfsNode("test", logger, description=description, ssh_to='localhost')
self.assertIsInstance(node.supported_recv_options, list)
if __name__ == '__main__':
unittest.main()


@ -2,6 +2,49 @@ import subprocess
import os
import select
try:
from shlex import quote as cmd_quote
except ImportError:
from pipes import quote as cmd_quote
class CmdItem:
"""one command item, to be added to a CmdPipe"""
def __init__(self, cmd, readonly=False, stderr_handler=None, exit_handler=None, shell=False):
"""create item. caller has to make sure cmd is properly escaped when using shell.
:type cmd: list of str
"""
self.cmd = cmd
self.readonly = readonly
self.stderr_handler = stderr_handler
self.exit_handler = exit_handler
self.shell = shell
self.process = None
def __str__(self):
"""return copy-pastable version of command."""
if self.shell:
# its already copy pastable for a shell:
return " ".join(self.cmd)
else:
# make it copy-pastable, will make a mess of quotes sometimes, but is correct
return " ".join(map(cmd_quote, self.cmd))
def create(self, stdin):
"""actually create the subprocess (called by CmdPipe)"""
# make sure the command gets all the data in utf8 format:
# (this is necessary if LC_ALL=en_US.utf8 is not set in the environment)
encoded_cmd = []
for arg in self.cmd:
encoded_cmd.append(arg.encode('utf-8'))
self.process = subprocess.Popen(encoded_cmd, env=os.environ, stdout=subprocess.PIPE, stdin=stdin,
stderr=subprocess.PIPE, shell=self.shell)
class CmdPipe:
"""a pipe of one or more commands. also takes care of utf-8 encoding/decoding and line based parsing"""
@ -17,42 +60,35 @@ class CmdPipe:
self.readonly = readonly
self._should_execute = True
def add(self, cmd, readonly=False, stderr_handler=None, exit_handler=None):
"""adds a command to pipe"""
def add(self, cmd_item):
"""adds a CmdItem to pipe.
:type cmd_item: CmdItem
"""
self.items.append({
'cmd': cmd,
'stderr_handler': stderr_handler,
'exit_handler': exit_handler
})
self.items.append(cmd_item)
if not readonly and self.readonly:
if not cmd_item.readonly and self.readonly:
self._should_execute = False
def __str__(self):
"""transform into oneliner for debugging and testing """
"""transform whole pipe into oneliner for debugging and testing. this should generate a copy-pastable string for in a console """
#just one command?
if len(self.items)==1:
return " ".join(self.items[0]['cmd'])
#an actual pipe
ret = ""
for item in self.items:
if ret:
ret = ret + " | "
ret = ret + "(" + " ".join(item['cmd']) + ")"
ret = ret + "({})".format(item) # this will do proper escaping to make it copypastable
return ret
def should_execute(self):
return(self._should_execute)
return self._should_execute
def execute(self, stdout_handler):
"""run the pipe. returns True if it executed, and false if it skipped due to readonly conditions"""
"""run the pipe. returns True all exit handlers returned true"""
if not self._should_execute:
return False
return True
# first process should have actual user input as stdin:
selectors = []
@ -62,29 +98,21 @@ class CmdPipe:
stdin = subprocess.PIPE
for item in self.items:
# make sure the command gets all the data in utf8 format:
# (this is necessary if LC_ALL=en_US.utf8 is not set in the environment)
encoded_cmd = []
for arg in item['cmd']:
encoded_cmd.append(arg.encode('utf-8'))
item['process'] = subprocess.Popen(encoded_cmd, env=os.environ, stdout=subprocess.PIPE, stdin=stdin,
stderr=subprocess.PIPE)
selectors.append(item['process'].stderr)
item.create(stdin)
selectors.append(item.process.stderr)
if last_stdout is None:
# we're the first process in the pipe, do we have some input?
if self.inp is not None:
# TODO: make streaming to support big inputs?
item['process'].stdin.write(self.inp.encode('utf-8'))
item['process'].stdin.close()
item.process.stdin.write(self.inp.encode('utf-8'))
item.process.stdin.close()
else:
#last stdout was piped to this stdin already, so close it because we dont need it anymore
# last stdout was piped to this stdin already, so close it because we dont need it anymore
last_stdout.close()
last_stdout = item['process'].stdout
stdin=last_stdout
last_stdout = item.process.stdout
stdin = last_stdout
# monitor last stdout as well
selectors.append(last_stdout)
@ -104,29 +132,29 @@ class CmdPipe:
eof_count = eof_count + 1
for item in self.items:
if item['process'].stderr in read_ready:
line = item['process'].stderr.readline().decode('utf-8').rstrip()
if item.process.stderr in read_ready:
line = item.process.stderr.readline().decode('utf-8').rstrip()
if line != "":
item['stderr_handler'](line)
item.stderr_handler(line)
else:
eof_count = eof_count + 1
if item['process'].poll() is not None:
if item.process.poll() is not None:
done_count = done_count + 1
# all filehandles are eof and all processes are done (poll() is not None)
if eof_count == len(selectors) and done_count == len(self.items):
break
#close filehandles
# close filehandles
last_stdout.close()
for item in self.items:
item['process'].stderr.close()
item.process.stderr.close()
#call exit handlers
# call exit handlers
success = True
for item in self.items:
if item['exit_handler'] is not None:
item['exit_handler'](item['process'].returncode)
if item.exit_handler is not None:
success=item.exit_handler(item.process.returncode) and success
return True
return success
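A minimal sketch of the reworked `CmdItem`/`CmdPipe` API, mirroring the updated tests above (note that exit handlers now return a boolean, and `execute()` returns True only if all of them did):
```python
from zfs_autobackup.CmdPipe import CmdPipe, CmdItem

p = CmdPipe(readonly=False, inp=None)
out = []
# pipe `echo test` through `tr e E`; each exit handler checks its exit code
p.add(CmdItem(["echo", "test"], stderr_handler=print, exit_handler=lambda code: code == 0))
p.add(CmdItem(["tr", "e", "E"], stderr_handler=print, exit_handler=lambda code: code == 0))
success = p.execute(stdout_handler=out.append)
# out == ["tEst"]; success is True because every exit handler returned True
```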


@ -1,16 +1,24 @@
import os
import select
import subprocess
from zfs_autobackup.CmdPipe import CmdPipe
from zfs_autobackup.CmdPipe import CmdPipe, CmdItem
from zfs_autobackup.LogStub import LogStub
try:
from shlex import quote as cmd_quote
except ImportError:
from pipes import quote as cmd_quote
class ExecuteError(Exception):
pass
class ExecuteNode(LogStub):
"""an endpoint to execute local or remote commands via ssh"""
PIPE=1
def __init__(self, ssh_config=None, ssh_to=None, readonly=False, debug_output=False):
"""ssh_config: custom ssh config
ssh_to: server you want to ssh to. none means local
@ -41,48 +49,43 @@ class ExecuteNode(LogStub):
else:
self.error("STDERR > " + line.rstrip())
# def _parse_stderr_pipe(self, line, hide_errors):
# """parse stderr from pipe input process. can be overridden in subclass"""
# if hide_errors:
# self.debug("STDERR|> " + line.rstrip())
# else:
# self.error("STDERR|> " + line.rstrip())
def _quote(self, cmd):
"""return quoted version of command. if it has value PIPE it will add an actual | """
if cmd==self.PIPE:
return('|')
else:
return(cmd_quote(cmd))
def _remote_cmd(self, cmd):
"""transforms cmd in correct form for remote over ssh, if needed"""
def _shell_cmd(self, cmd):
"""prefix specified ssh shell to command and escape shell characters"""
# use ssh?
if self.ssh_to is not None:
encoded_cmd = []
encoded_cmd.append("ssh")
ret=[]
#add remote shell
if not self.is_local():
ret=["ssh"]
if self.ssh_config is not None:
encoded_cmd.extend(["-F", self.ssh_config])
ret.extend(["-F", self.ssh_config])
encoded_cmd.append(self.ssh_to)
ret.append(self.ssh_to)
for arg in cmd:
# add single quotes for remote commands to support spaces and other weird stuff (remote commands are
# executed in a shell) and escape existing single quotes (bash needs ' to end the quoted string,
# then a \' for the actual quote and then another ' to start a new quoted string) (and then python
# needs the double \ to get a single \)
encoded_cmd.append(("'" + arg.replace("'", "'\\''") + "'"))
return encoded_cmd
else:
return(cmd)
ret.append(" ".join(map(self._quote, cmd)))
return ret
def is_local(self):
return self.ssh_to is None
def run(self, cmd, inp=None, tab_split=False, valid_exitcodes=None, readonly=False, hide_errors=False,
return_stderr=False, pipe=False):
"""run a command on the node , checks output and parses/handle output and returns it
Either uses a local shell (sh -c) or remote shell (ssh) to execute the command. Therefore the command can have stuff like actual pipes in it, if you dont want to use pipe=True to pipe stuff.
:param cmd: the actual command, should be a list, where the first item is the command
and the rest are parameters.
and the rest are parameters. use ExecuteNode.PIPE to add an unescaped |
(if you want to use system piping instead of python piping)
:param pipe: return CmdPipe instead of executing it.
:param inp: Can be None, a string or a CmdPipe that was previously returned.
:param tab_split: split tabbed files in output into a list
@ -96,13 +99,14 @@ class ExecuteNode(LogStub):
# create new pipe?
if not isinstance(inp, CmdPipe):
p = CmdPipe(self.readonly, inp)
cmd_pipe = CmdPipe(self.readonly, inp)
else:
# add stuff to existing pipe
p = inp
cmd_pipe = inp
# stderr parser
error_lines = []
def stderr_handler(line):
if tab_split:
error_lines.append(line.rstrip().split('\t'))
@ -119,18 +123,22 @@ class ExecuteNode(LogStub):
self.debug("EXIT > {}".format(exit_code))
if (valid_exitcodes != []) and (exit_code not in valid_exitcodes):
raise (ExecuteError("Command '{}' returned exit code {} (valid codes: {})".format(" ".join(cmd), exit_code, valid_exitcodes)))
self.error("Command \"{}\" returned exit code {} (valid codes: {})".format(cmd_item, exit_code, valid_exitcodes))
return False
# add command to pipe
encoded_cmd = self._remote_cmd(cmd)
p.add(cmd=encoded_cmd, readonly=readonly, stderr_handler=stderr_handler, exit_handler=exit_handler)
return True
# add shell command and handlers to pipe
cmd_item=CmdItem(cmd=self._shell_cmd(cmd), readonly=readonly, stderr_handler=stderr_handler, exit_handler=exit_handler, shell=self.is_local())
cmd_pipe.add(cmd_item)
# return pipe instead of executing?
if pipe:
return p
return cmd_pipe
# stdout parser
output_lines = []
def stdout_handler(line):
if tab_split:
output_lines.append(line.rstrip().split('\t'))
@ -138,13 +146,14 @@ class ExecuteNode(LogStub):
output_lines.append(line.rstrip())
self._parse_stdout(line)
if p.should_execute():
self.debug("CMD > {}".format(p))
if cmd_pipe.should_execute():
self.debug("CMD > {}".format(cmd_pipe))
else:
self.debug("CMDSKIP> {}".format(p))
self.debug("CMDSKIP> {}".format(cmd_pipe))
# execute and calls handlers in CmdPipe
p.execute(stdout_handler=stdout_handler)
if not cmd_pipe.execute(stdout_handler=stdout_handler):
raise(ExecuteError("Last command returned error"))
if return_stderr:
return output_lines, error_lines
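A small sketch of the new system-piping feature, matching the test added above: `run()` turns `ExecuteNode.PIPE` into an unescaped `|`, so the local shell or remote shell (ssh) does the piping:
```python
from zfs_autobackup.ExecuteNode import ExecuteNode

node = ExecuteNode()  # local node; pass ssh_to="user@host" for a remote one
# a literal "|" argument is still escaped, but ExecuteNode.PIPE becomes a real pipe
print(node.run(["echo", "abc", ExecuteNode.PIPE, "tr", "a", "A"]))  # prints ['Abc']
```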


@ -31,6 +31,13 @@ class LogConsole:
print("! " + txt, file=sys.stderr)
sys.stderr.flush()
def warning(self, txt):
if self.colorama:
print(colorama.Fore.YELLOW + colorama.Style.BRIGHT + " NOTE: " + txt + colorama.Style.RESET_ALL)
else:
print(" NOTE: " + txt)
sys.stdout.flush()
def verbose(self, txt):
if self.show_verbose:
if self.colorama:

View File

@ -11,5 +11,8 @@ class LogStub:
def verbose(self, txt):
print("VERBOSE: " + txt)
def warning(self, txt):
print("WARNING: " + txt)
def error(self, txt):
print("ERROR : " + txt)


@ -2,6 +2,8 @@ import argparse
import sys
import time
from zfs_autobackup import compressors
from zfs_autobackup.ExecuteNode import ExecuteNode
from zfs_autobackup.Thinner import Thinner
from zfs_autobackup.ZfsDataset import ZfsDataset
from zfs_autobackup.LogConsole import LogConsole
@ -9,10 +11,12 @@ from zfs_autobackup.ZfsNode import ZfsNode
from zfs_autobackup.ThinnerRule import ThinnerRule
class ZfsAutobackup:
"""main class"""
VERSION = "3.1-beta5"
VERSION = "3.1-rc4"
HEADER = "zfs-autobackup v{} - (c)2021 E.H.Eefting (edwin@datux.nl)".format(VERSION)
def __init__(self, argv, print_arguments=True):
@ -41,6 +45,10 @@ class ZfsAutobackup:
help='Target ZFS filesystem (optional: if not specified, zfs-autobackup will only operate '
'as snapshot-tool on source)')
parser.add_argument('--pre-snapshot-cmd', metavar="COMMAND", default=[], action='append',
help='Run COMMAND before snapshotting (can be used multiple times.)')
parser.add_argument('--post-snapshot-cmd', metavar="COMMAND", default=[], action='append',
help='Run COMMAND after snapshotting (can be used multiple times.)')
parser.add_argument('--other-snapshots', action='store_true',
help='Send over other snapshots as well, not just the ones created by this tool.')
parser.add_argument('--no-snapshot', action='store_true',
@ -71,7 +79,7 @@ class ZfsAutobackup:
parser.add_argument('--clear-mountpoint', action='store_true',
help='Set property canmount=noauto for new datasets. (recommended, prevents mount '
'conflicts. same as --set-properties canmount=noauto)')
parser.add_argument('--filter-properties', metavar='PROPERY,...', type=str,
parser.add_argument('--filter-properties', metavar='PROPERTY,...', type=str,
help='List of properties to "filter" when receiving filesystems. (you can still restore '
'them with zfs inherit -S)')
parser.add_argument('--set-properties', metavar='PROPERTY=VALUE,...', type=str,
@ -95,6 +103,9 @@ class ZfsAutobackup:
parser.add_argument('--encrypt', action='store_true',
help='Encrypt data after receiving it.')
parser.add_argument('--zfs-compressed', action='store_true',
help='Transfer blocks that already have zfs-compression as-is.')
parser.add_argument('--test', action='store_true',
help='dont change anything, just show what would be done (still does all read-only '
'operations)')
@ -108,17 +119,22 @@ class ZfsAutobackup:
parser.add_argument('--no-progress', action='store_true',
help=argparse.SUPPRESS) # needed to workaround a zfs recv -v bug
parser.add_argument('--send-pipe', metavar="COMMAND", default=[], action='append',
help='pipe zfs send output through COMMAND')
parser.add_argument('--recv-pipe', metavar="COMMAND", default=[], action='append',
help='pipe zfs recv input through COMMAND')
parser.add_argument('--resume', action='store_true', help=argparse.SUPPRESS)
parser.add_argument('--raw', action='store_true', help=argparse.SUPPRESS)
parser.add_argument('--exclude-received', action='store_true',
help=argparse.SUPPRESS) # probably never needed anymore
#these things all do stuff by piping zfs send/recv IO
parser.add_argument('--send-pipe', metavar="COMMAND", default=[], action='append',
help='pipe zfs send output through COMMAND (can be used multiple times)')
parser.add_argument('--recv-pipe', metavar="COMMAND", default=[], action='append',
help='pipe zfs recv input through COMMAND (can be used multiple times)')
parser.add_argument('--compress', metavar='TYPE', default=None, choices=compressors.choices(), help='Use compression during transfer, zstd-fast recommended. ({})'.format(", ".join(compressors.choices())))
parser.add_argument('--rate', metavar='DATARATE', default=None, help='Limit data transfer rate (e.g. 128K. requires mbuffer.)')
parser.add_argument('--buffer', metavar='SIZE', default=None, help='Add zfs send and recv buffers to smooth out IO bursts. (e.g. 128M. requires mbuffer)')
# note args is the only global variable we use, since its a global readonly setting anyway
args = parser.parse_args(argv)
@ -141,21 +157,31 @@ class ZfsAutobackup:
args.rollback = True
self.log = LogConsole(show_debug=self.args.debug, show_verbose=self.args.verbose, color=sys.stdout.isatty())
self.verbose(self.HEADER)
if args.resume:
self.verbose("NOTE: The --resume option isn't needed anymore (its autodetected now)")
self.warning("The --resume option isn't needed anymore (its autodetected now)")
if args.raw:
self.verbose(
"NOTE: The --raw option isn't needed anymore (its autodetected now). Also see --encrypt and --decrypt.")
self.warning(
"The --raw option isn't needed anymore (its autodetected now). Also see --encrypt and --decrypt.")
if args.target_path is not None and args.target_path[0] == "/":
self.log.error("Target should not start with a /")
sys.exit(255)
if args.compress and args.ssh_source is None and args.ssh_target is None:
self.warning("Using compression, but transfer is local.")
if args.compress and args.zfs_compressed:
self.warning("Using --compress with --zfs-compressed, might be inefficient.")
def verbose(self, txt):
self.log.verbose(txt)
def warning(self, txt):
self.log.warning(txt)
def error(self, txt):
self.log.error(txt)
@ -259,6 +285,63 @@ class ZfsAutobackup:
if self.args.progress:
self.clear_progress()
def get_send_pipes(self, logger):
"""determine the zfs send pipe"""
ret=[]
# IO buffer
if self.args.buffer:
logger("zfs send buffer : {}".format(self.args.buffer))
ret.extend([ ExecuteNode.PIPE, "mbuffer", "-q", "-s128k", "-m"+self.args.buffer ])
# custom pipes
for send_pipe in self.args.send_pipe:
ret.append(ExecuteNode.PIPE)
ret.extend(send_pipe.split(" "))
logger("zfs send custom pipe : {}".format(send_pipe))
# compression
if self.args.compress!=None:
ret.append(ExecuteNode.PIPE)
cmd=compressors.compress_cmd(self.args.compress)
ret.extend(cmd)
logger("zfs send compression : {}".format(" ".join(cmd)))
# transfer rate
if self.args.rate:
logger("zfs send transfer rate : {}".format(self.args.rate))
ret.extend([ ExecuteNode.PIPE, "mbuffer", "-q", "-s128k", "-m16M", "-R"+self.args.rate ])
return ret
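As a sketch of the result: with --buffer 256M, --compress zstd-fast and --rate 1M (and no custom --send-pipe), get_send_pipes() returns:

[ExecuteNode.PIPE, "mbuffer", "-q", "-s128k", "-m256M",         # IO buffer
ExecuteNode.PIPE, "zstdmt", "-3",                               # compression
ExecuteNode.PIPE, "mbuffer", "-q", "-s128k", "-m16M", "-R1M"]   # rate limit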
def get_recv_pipes(self, logger):
"""determine the commands to pipe zfs recv input through"""
ret = []
# decompression
if self.args.compress is not None:
cmd = compressors.decompress_cmd(self.args.compress)
ret.extend(cmd)
ret.append(ExecuteNode.PIPE)
logger("zfs recv decompression : {}".format(" ".join(cmd)))
# custom pipes
for recv_pipe in self.args.recv_pipe:
ret.extend(recv_pipe.split(" "))
ret.append(ExecuteNode.PIPE)
logger("zfs recv custom pipe : {}".format(recv_pipe))
# IO buffer
if self.args.buffer:
# only add a second buffer if it's useful (e.g. non-local transfer or other pipes active)
if self.args.ssh_source is not None or self.args.ssh_target is not None or self.args.recv_pipe or self.args.send_pipe or self.args.compress is not None:
logger("zfs recv buffer : {}".format(self.args.buffer))
ret.extend(["mbuffer", "-q", "-s128k", "-m"+self.args.buffer, ExecuteNode.PIPE ])
return ret
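For the same options on a remote transfer, the receive side is mirrored, with each element followed by a PIPE marker so the chain feeds into zfs recv:

["zstdmt", "-dc", ExecuteNode.PIPE,                     # decompression
"mbuffer", "-q", "-s128k", "-m256M", ExecuteNode.PIPE]  # IO buffer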
# NOTE: this method also uses self.args. Args that need extra processing are passed as function parameters:
def sync_datasets(self, source_node, source_datasets, target_node):
"""Sync datasets, or thin-only on both sides
@ -267,6 +350,9 @@ class ZfsAutobackup:
:type source_node: ZfsNode
"""
send_pipes=self.get_send_pipes(source_node.verbose)
recv_pipes=self.get_recv_pipes(target_node.verbose)
fail_count = 0
count = 0
target_datasets = []
@ -304,8 +390,8 @@ class ZfsAutobackup:
also_other_snapshots=self.args.other_snapshots,
no_send=self.args.no_send,
destroy_incompatible=self.args.destroy_incompatible,
output_pipes=self.args.send_pipe, input_pipes=self.args.recv_pipe,
decrypt=self.args.decrypt, encrypt=self.args.encrypt)
send_pipes=send_pipes, recv_pipes=recv_pipes,
decrypt=self.args.decrypt, encrypt=self.args.encrypt, zfs_compressed=self.args.zfs_compressed )
except Exception as e:
fail_count = fail_count + 1
source_dataset.error("FAILED: " + str(e))
@ -372,10 +458,9 @@ class ZfsAutobackup:
def run(self):
try:
self.verbose(self.HEADER)
if self.args.test:
self.verbose("TEST MODE - SIMULATING WITHOUT MAKING ANY CHANGES")
self.warning("TEST MODE - SIMULATING WITHOUT MAKING ANY CHANGES")
################ create source zfsNode
self.set_title("Source settings")
@ -389,7 +474,7 @@ class ZfsAutobackup:
ssh_to=self.args.ssh_source, readonly=self.args.test,
debug_output=self.args.debug_output, description=description, thinner=source_thinner)
source_node.verbose(
"Selects all datasets that have property 'autobackup:{}=true' (or childs of datasets that have "
"Selects all datasets that have property 'autobackup:{}=true' (or children of datasets that have "
"'autobackup:{}=child')".format(
self.args.backup_name, self.args.backup_name))
@ -402,10 +487,10 @@ class ZfsAutobackup:
if self.args.ssh_source == self.args.ssh_target:
if self.args.target_path:
# target and source are the same, make sure to exclude target_path
source_node.verbose("NOTE: Source and target are on the same host, excluding target-path")
self.warning("Source and target are on the same host, excluding target-path from selection.")
exclude_paths.append(self.args.target_path)
else:
source_node.verbose("NOTE: Source and target are on the same host, excluding received datasets.")
self.warning("Source and target are on the same host, excluding received datasets from selection.")
exclude_received=True
@ -425,7 +510,9 @@ class ZfsAutobackup:
if not self.args.no_snapshot:
self.set_title("Snapshotting")
source_node.consistent_snapshot(source_datasets, source_node.new_snapshotname(),
min_changed_bytes=self.args.min_change)
min_changed_bytes=self.args.min_change,
pre_snapshot_cmds=self.args.pre_snapshot_cmd,
post_snapshot_cmds=self.args.post_snapshot_cmd)
################# sync
# if target is specified, we sync the datasets, otherwise we just thin the source. (e.g. snapshot mode)
@ -479,7 +566,7 @@ class ZfsAutobackup:
if self.args.test:
self.verbose("")
self.verbose("TEST MODE - DID NOT MAKE ANY CHANGES!")
self.warning("TEST MODE - DID NOT MAKE ANY CHANGES!")
return fail_count

View File

@ -503,14 +503,15 @@ class ZfsDataset:
return self.from_names(names[1:])
def send_pipe(self, features, prev_snapshot, resume_token, show_progress, raw, send_properties, write_embedded, output_pipes):
def send_pipe(self, features, prev_snapshot, resume_token, show_progress, raw, send_properties, write_embedded, send_pipes, zfs_compressed):
"""returns a pipe with zfs send output for this snapshot
resume_token: resume sending from this token. (in that case we don't
need to know snapshot names)
Args:
:type output_pipes: list of str
:param send_pipes: output cmd array that will be appended to the actual zfs send command (e.g. mbuffer or a compression program)
:type send_pipes: list of str
:type features: list of str
:type prev_snapshot: ZfsDataset
:type resume_token: str
@ -529,7 +530,7 @@ class ZfsDataset:
if write_embedded and 'embedded_data' in features and "-e" in self.zfs_node.supported_send_options:
cmd.append("--embed") # WRITE_EMBEDDED, more compact stream
if "-c" in self.zfs_node.supported_send_options:
if zfs_compressed and "-c" in self.zfs_node.supported_send_options:
cmd.append("--compressed") # use compressed WRITE records
# raw? (send over encrypted data in its original encrypted form without decrypting)
@ -556,23 +557,13 @@ class ZfsDataset:
cmd.append(self.name)
# #add custom output pipes?
# if output_pipes:
# #local so do our own piping
# if self.zfs_node.is_local():
# output_pipe = self.zfs_node.run(cmd)
# for pipe_cmd in output_pipes:
# output_pipe=self.zfs_node.run(pipe_cmd, inp=output_pipe, )
# return output_pipe
# #remote, so add with actual | and let remote shell handle it
# else:
# for pipe_cmd in output_pipes:
# cmd.append("|")
# cmd.extend(pipe_cmd)
cmd.extend(send_pipes)
return self.zfs_node.run(cmd, pipe=True, readonly=True)
output_pipe = self.zfs_node.run(cmd, pipe=True, readonly=True)
def recv_pipe(self, pipe, features, filter_properties=None, set_properties=None, ignore_exit_code=False):
return output_pipe
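An illustrative sketch of how the PIPE markers become a real shell pipeline (send options elided):

# cmd == ["zfs", "send", ..., "pool/fs@snap", ExecuteNode.PIPE, "zstdmt", "-3"]
# is executed by the node as:  zfs send ... pool/fs@snap | zstdmt -3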
def recv_pipe(self, pipe, features, recv_pipes, filter_properties=None, set_properties=None, ignore_exit_code=False):
"""starts a zfs recv for this snapshot and uses pipe as input
note: you can use it on both a snapshot and a filesystem object. The
@ -580,6 +571,7 @@ class ZfsDataset:
differently.
Args:
:param recv_pipes: input cmd array that will be prepended to the actual zfs recv command (e.g. mbuffer or a decompression program)
:type pipe: subprocess.Popen
:type features: list of str
:type filter_properties: list of str
@ -596,6 +588,8 @@ class ZfsDataset:
# build target command
cmd = []
cmd.extend(recv_pipes)
cmd.extend(["zfs", "recv"])
# don't mount filesystem that is received
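Since recv_pipes is prepended, the receiving chain runs in front of zfs recv; a sketch with --compress and --buffer active on a remote target:

# cmd == ["zstdmt", "-dc", ExecuteNode.PIPE, "mbuffer", "-q", "-s128k", "-m256M", ExecuteNode.PIPE, "zfs", "recv", ...]
# is executed as:  zstdmt -dc | mbuffer -q -s128k -m256M | zfs recv ...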
@ -640,15 +634,15 @@ class ZfsDataset:
def transfer_snapshot(self, target_snapshot, features, prev_snapshot, show_progress,
filter_properties, set_properties, ignore_recv_exit_code, resume_token,
raw, send_properties, write_embedded, output_pipes, input_pipes):
raw, send_properties, write_embedded, send_pipes, recv_pipes, zfs_compressed):
"""transfer this snapshot to target_snapshot. specify prev_snapshot for
incremental transfer
connects a send_pipe() to recv_pipe()
Args:
:type output_pipes: list of str
:type input_pipes: list of str
:type send_pipes: list of str
:type recv_pipes: list of str
:type target_snapshot: ZfsDataset
:type features: list of str
:type prev_snapshot: ZfsDataset
@ -679,12 +673,13 @@ class ZfsDataset:
# do it
pipe = self.send_pipe(features=features, show_progress=show_progress, prev_snapshot=prev_snapshot,
resume_token=resume_token, raw=raw, send_properties=send_properties, write_embedded=write_embedded, output_pipes=output_pipes)
resume_token=resume_token, raw=raw, send_properties=send_properties, write_embedded=write_embedded, send_pipes=send_pipes, zfs_compressed=zfs_compressed)
target_snapshot.recv_pipe(pipe, features=features, filter_properties=filter_properties,
set_properties=set_properties, ignore_exit_code=ignore_recv_exit_code)
set_properties=set_properties, ignore_exit_code=ignore_recv_exit_code, recv_pipes=recv_pipes)
def abort_resume(self):
"""abort current resume state"""
self.debug("Aborting resume")
self.zfs_node.run(["zfs", "recv", "-A", self.name])
def rollback(self):
@ -907,14 +902,18 @@ class ZfsDataset:
"""
if 'receive_resume_token' in target_dataset.properties:
resume_token = target_dataset.properties['receive_resume_token']
# not valid anymore?
resume_snapshot = self.get_resume_snapshot(resume_token)
if not resume_snapshot or start_snapshot.snapshot_name != resume_snapshot.snapshot_name:
target_dataset.verbose("Cant resume, resume token no longer valid.")
if start_snapshot is None:
target_dataset.verbose("Aborting resume, it's obsolete.")
target_dataset.abort_resume()
else:
return resume_token
resume_token = target_dataset.properties['receive_resume_token']
# not valid anymore
resume_snapshot = self.get_resume_snapshot(resume_token)
if not resume_snapshot or start_snapshot.snapshot_name != resume_snapshot.snapshot_name:
target_dataset.verbose("Aborting resume, its no longer valid.")
target_dataset.abort_resume()
else:
return resume_token
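For reference, the resume handling maps onto these zfs commands (illustrative; the dataset name is made up):

# zfs get -H -o value receive_resume_token backuppool/fs   # token present => an interrupted receive exists
# zfs send -t <token> | zfs recv ...                       # resume the interrupted transfer
# zfs recv -A backuppool/fs                                # abort_resume(): discard the partial receive state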
def _plan_sync(self, target_dataset, also_other_snapshots):
"""plan where to start syncing and what to sync and what to keep
@ -969,13 +968,13 @@ class ZfsDataset:
def sync_snapshots(self, target_dataset, features, show_progress, filter_properties, set_properties,
ignore_recv_exit_code, holds, rollback, decrypt, encrypt, also_other_snapshots,
no_send, destroy_incompatible, output_pipes, input_pipes):
no_send, destroy_incompatible, send_pipes, recv_pipes, zfs_compressed):
"""sync this dataset's snapshots to target_dataset, while also thinning
out old snapshots along the way.
Args:
:type output_pipes: list of str
:type input_pipes: list of str
:type send_pipes: list of str
:type recv_pipes: list of str
:type target_dataset: ZfsDataset
:type features: list of str
:type show_progress: bool
@ -1052,7 +1051,7 @@ class ZfsDataset:
filter_properties=active_filter_properties,
set_properties=active_set_properties,
ignore_recv_exit_code=ignore_recv_exit_code,
resume_token=resume_token, write_embedded=write_embedded,raw=raw, send_properties=send_properties, output_pipes=output_pipes, input_pipes=input_pipes)
resume_token=resume_token, write_embedded=write_embedded, raw=raw, send_properties=send_properties, send_pipes=send_pipes, recv_pipes=recv_pipes, zfs_compressed=zfs_compressed)
resume_token = None
@ -1081,7 +1080,7 @@ class ZfsDataset:
source_snapshot.debug("skipped (target doesn't need it)")
# was it actually a resume?
if resume_token:
target_dataset.debug("aborting resume, since we don't want that snapshot anymore")
target_dataset.verbose("Aborting resume, we dont want that snapshot anymore.")
target_dataset.abort_resume()
resume_token = None

View File

@ -1,6 +1,7 @@
# python 2 compatibility
from __future__ import print_function
import re
import shlex
import subprocess
import sys
import time
@ -120,7 +121,7 @@ class ZfsNode(ExecuteNode):
self._progress_total_bytes = int(progress_fields[2])
elif progress_fields[0] == 'incremental':
self._progress_total_bytes = int(progress_fields[3])
else:
elif progress_fields[1].isnumeric():
bytes_ = int(progress_fields[1])
if self._progress_total_bytes:
percentage = min(100, int(bytes_ * 100 / self._progress_total_bytes))
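The fields parsed here correspond to the parsable verbose output of zfs send, roughly as follows (illustrative; the exact format differs per platform and zfs version):

# full         pool/fs@snap1            123456   -> field 2 is the estimated total size
# incremental  snap0    pool/fs@snap1   123456   -> field 3 is the estimated total size
# 12:00:01     65536    pool/fs@snap1            -> numeric field 1 is bytes sent so far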
@ -151,6 +152,9 @@ class ZfsNode(ExecuteNode):
def error(self, txt):
self.logger.error("{} {}".format(self.description, txt))
def warning(self, txt):
self.logger.warning("{} {}".format(self.description, txt))
def debug(self, txt):
self.logger.debug("{} {}".format(self.description, txt))
@ -158,7 +162,7 @@ class ZfsNode(ExecuteNode):
"""determine uniq new snapshotname"""
return self.backup_name + "-" + time.strftime("%Y%m%d%H%M%S")
def consistent_snapshot(self, datasets, snapshot_name, min_changed_bytes):
def consistent_snapshot(self, datasets, snapshot_name, min_changed_bytes, pre_snapshot_cmds=[], post_snapshot_cmds=[]):
"""create a consistent (atomic) snapshot of specified datasets, per pool.
"""
@ -188,17 +192,32 @@ class ZfsNode(ExecuteNode):
self.verbose("No changes anywhere: not creating snapshots.")
return
# create consistent snapshot per pool
for (pool_name, snapshots) in pools.items():
cmd = ["zfs", "snapshot"]
try:
for cmd in pre_snapshot_cmds:
self.verbose("Running pre-snapshot-cmd: {}".format(cmd))
self.run(cmd=shlex.split(cmd), readonly=False)
# create consistent snapshot per pool
for (pool_name, snapshots) in pools.items():
cmd = ["zfs", "snapshot"]
cmd.extend(map(lambda snapshot_: str(snapshot_), snapshots))
self.verbose("Creating snapshots {} in pool {}".format(snapshot_name, pool_name))
self.run(cmd, readonly=False)
finally:
for cmd in post_snapshot_cmds:
self.verbose("Running post-snapshot-cmd: {}".format(cmd))
try:
self.run(cmd=shlex.split(cmd), readonly=False)
except Exception as e:
self.warning("post-snapshot-cmd failed: {}".format(e))
cmd.extend(map(lambda snapshot_: str(snapshot_), snapshots))
self.verbose("Creating snapshots {} in pool {}".format(snapshot_name, pool_name))
self.run(cmd, readonly=False)
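A hypothetical invocation (the service name is made up): since the command strings go through shlex.split(), shell-style quoting works, and post-snapshot-cmds run in the finally: block even when snapshotting itself fails:

zfs-autobackup --pre-snapshot-cmd 'systemctl stop myapp' --post-snapshot-cmd 'systemctl start myapp' offsite1 backuppool/backups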
def selected_datasets(self, exclude_received, exclude_paths):
"""determine filesystems that should be backupped by looking at the special autobackup-property, systemwide
"""determine filesystems that should be backed up by looking at the special autobackup-property, systemwide
returns: list of ZfsDataset
"""

View File

@ -0,0 +1,69 @@
# Adapted from Syncoid :)
# this software is licensed for use under the Free Software Foundation's GPL v3.0 license, as retrieved
# from http://www.gnu.org/licenses/gpl-3.0.html on 2014-11-17. A copy should also be available in this
# project's Git repository at https://github.com/jimsalterjrs/sanoid/blob/master/LICENSE.
COMPRESS_CMDS = {
'gzip': {
'cmd': 'gzip',
'args': [ '-3' ],
'dcmd': 'zcat',
'dargs': [],
},
'pigz-fast': {
'cmd': 'pigz',
'args': [ '-3' ],
'dcmd': 'pigz',
'dargs': [ '-dc' ],
},
'pigz-slow': {
'cmd': 'pigz',
'args': [ '-9' ],
'dcmd': 'pigz',
'dargs': [ '-dc' ],
},
'zstd-fast': {
'cmd': 'zstdmt',
'args': [ '-3' ],
'dcmd': 'zstdmt',
'dargs': [ '-dc' ],
},
'zstd-slow': {
'cmd': 'zstdmt',
'args': [ '-19' ],
'dcmd': 'zstdmt',
'dargs': [ '-dc' ],
},
'xz': {
'cmd': 'xz',
'args': [],
'dcmd': 'xz',
'dargs': [ '-d' ],
},
'lzo': {
'cmd': 'lzop',
'args': [],
'dcmd': 'lzop',
'dargs': [ '-dfc' ],
},
'lz4': {
'cmd': 'lz4',
'args': [],
'dcmd': 'lz4',
'dargs': [ '-dc' ],
},
}
def compress_cmd(compressor):
ret = [COMPRESS_CMDS[compressor]['cmd']]
ret.extend(COMPRESS_CMDS[compressor]['args'])
return ret
def decompress_cmd(compressor):
ret = [COMPRESS_CMDS[compressor]['dcmd']]
ret.extend(COMPRESS_CMDS[compressor]['dargs'])
return ret
def choices():
return list(COMPRESS_CMDS.keys())
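A minimal usage sketch of this module (the import path is an assumption based on how it is referenced above):

from zfs_autobackup import compressors  # assumed package layout

print(compressors.compress_cmd('zstd-fast'))    # ['zstdmt', '-3']
print(compressors.decompress_cmd('zstd-fast'))  # ['zstdmt', '-dc']
print(", ".join(compressors.choices()))         # gzip, pigz-fast, pigz-slow, zstd-fast, ...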