Compare commits


28 Commits

Author SHA1 Message Date
c5363a1538 pre/post cmd tests 2021-06-22 12:43:52 +02:00
119225ba5b bump 2021-06-18 17:27:41 +02:00
84437ee1d0 pre/post snapshot polishing. still needs test-scripts 2021-06-18 17:16:18 +02:00
1286bfafd0 Merge pull request #80 from tuffnatty/pre-post-snapshot-cmd
Add support for pre- and post-snapshot scripts (#39)
2021-06-17 11:48:09 +02:00
9fc2703638 Always call post-snapshot-cmd to clean up faster on snapshot failure 2021-06-17 12:38:16 +03:00
01dc65af96 Merge pull request #81 from tuffnatty/another-typo
Another typo fix
2021-06-17 11:21:08 +02:00
082153e0ce A more robust MySQL example 2021-06-17 09:52:18 +03:00
77f5474447 Fix tests. 2021-06-16 23:37:38 +03:00
55ff14f1d8 Allow multiple --pre/post-snapshot-cmd options. Add a usage example. 2021-06-16 23:19:29 +03:00
2acd26b304 Fix a typo 2021-06-16 19:52:06 +03:00
ec9459c1d2 Use shlex.split() for --pre-snapshot-cmd and --post-snapshot-cmd 2021-06-16 19:35:41 +03:00
233fd83ded Merge pull request #79 from tuffnatty/typos
Fix a couple of typos
2021-06-16 16:46:10 +02:00
37c24e092c Merge pull request #78 from tuffnatty/patch-1
Typo fix in argument name in warning message
2021-06-16 16:45:42 +02:00
b2bf11382c Add --pre-snapshot-cmd and --post-snapshot-cmd options 2021-06-16 16:12:20 +03:00
19b918044e Fix a couple of typos 2021-06-16 13:19:47 +03:00
67d9240e7b Typo fix in argument name in warning message 2021-06-16 00:32:31 +03:00
1a5e4a9cdd doc 2021-05-31 22:33:44 +02:00
31f8c359ff update version number 2021-05-31 22:19:28 +02:00
b50b7b7563 test 2021-05-31 22:10:56 +02:00
37f91e1e08 no longer use zfs send --compressed as default. uses --zfs-compressed to reenable it. fixes #77 . 2021-05-31 22:02:31 +02:00
a2f3aee5b1 test and fix resume edge-case 2021-05-26 23:06:19 +02:00
75d0a3cc7e rc1 2021-05-26 20:15:05 +02:00
98c55e2aa8 test fix 2021-05-26 18:07:17 +02:00
d478e22111 test rate limit 2021-05-26 18:04:05 +02:00
3a4953fbc5 doc 2021-05-26 17:57:38 +02:00
8d4e041a9c add mbuffer 2021-05-26 17:52:10 +02:00
8725d56bc9 also add buffer on receving side 2021-05-26 17:38:05 +02:00
ab0bfdbf4e update docs 2021-05-19 00:52:56 +02:00
12 changed files with 376 additions and 110 deletions


@ -17,7 +17,7 @@ jobs:
- name: Prepare - name: Prepare
run: sudo apt update && sudo apt install zfsutils-linux lzop pigz zstd gzip xz-utils lz4 && sudo -H pip3 install coverage unittest2 mock==3.0.5 coveralls run: sudo apt update && sudo apt install zfsutils-linux lzop pigz zstd gzip xz-utils lz4 mbuffer && sudo -H pip3 install coverage unittest2 mock==3.0.5 coveralls
- name: Regression test - name: Regression test
@ -39,7 +39,7 @@ jobs:
- name: Prepare - name: Prepare
run: sudo apt update && sudo apt install zfsutils-linux python3-setuptools lzop pigz zstd gzip xz-utils liblz4-tool && sudo -H pip3 install coverage unittest2 mock==3.0.5 coveralls run: sudo apt update && sudo apt install zfsutils-linux python3-setuptools lzop pigz zstd gzip xz-utils liblz4-tool mbuffer && sudo -H pip3 install coverage unittest2 mock==3.0.5 coveralls
- name: Regression test - name: Regression test
@ -64,7 +64,7 @@ jobs:
python-version: '2.x' python-version: '2.x'
- name: Prepare - name: Prepare
run: sudo apt update && sudo apt install zfsutils-linux python-setuptools lzop pigz zstd gzip xz-utils liblz4-tool && sudo -H pip install coverage unittest2 mock==3.0.5 coveralls colorama run: sudo apt update && sudo apt install zfsutils-linux python-setuptools lzop pigz zstd gzip xz-utils liblz4-tool mbuffer && sudo -H pip install coverage unittest2 mock==3.0.5 coveralls colorama
- name: Regression test - name: Regression test
run: sudo -E ./tests/run_tests run: sudo -E ./tests/run_tests

README.md

@ -5,7 +5,7 @@
## Introduction ## Introduction
This is a tool I wrote to make replicating ZFS datasets easy and reliable. ZFS-autobackup tries to be the most reliable and easiest to use tool, while having all the features.
You can either use it as a **backup** tool, **replication** tool or **snapshot** tool. You can either use it as a **backup** tool, **replication** tool or **snapshot** tool.
@ -13,12 +13,10 @@ You can select what to backup by setting a custom `ZFS property`. This makes it
Other settings are just specified on the commandline: Simply setup and test your zfs-autobackup command and fix all the issues you might encounter. When you're done you can just copy/paste your command to a cron or script. Other settings are just specified on the commandline: Simply setup and test your zfs-autobackup command and fix all the issues you might encounter. When you're done you can just copy/paste your command to a cron or script.
Since its using ZFS commands, you can see what its actually doing by specifying `--debug`. This also helps a lot if you run into some strange problem or error. You can just copy-paste the command that fails and play around with it on the commandline. (something I missed in other tools) Since its using ZFS commands, you can see what it's actually doing by specifying `--debug`. This also helps a lot if you run into some strange problem or error. You can just copy-paste the command that fails and play around with it on the commandline. (something I missed in other tools)
An important feature that's missing from other tools is a reliable `--test` option: This allows you to see what zfs-autobackup will do and tune your parameters. It will do everything, except make changes to your system. An important feature that's missing from other tools is a reliable `--test` option: This allows you to see what zfs-autobackup will do and tune your parameters. It will do everything, except make changes to your system.
zfs-autobackup tries to be the easiest to use backup tool for zfs, with the most features.
## Features ## Features
* Works across operating systems: Tested with **Linux**, **FreeBSD/FreeNAS** and **SmartOS**. * Works across operating systems: Tested with **Linux**, **FreeBSD/FreeNAS** and **SmartOS**.
@ -34,6 +32,9 @@ zfs-autobackup tries to be the easiest to use backup tool for zfs, with the most
* Can be scheduled via a simple cronjob or run directly from commandline. * Can be scheduled via a simple cronjob or run directly from commandline.
* Supports resuming of interrupted transfers. * Supports resuming of interrupted transfers.
* ZFS encryption support: Can decrypt / encrypt or even re-encrypt datasets during transfer. * ZFS encryption support: Can decrypt / encrypt or even re-encrypt datasets during transfer.
* Supports sending with compression. (Using pigz, zstd etc)
* IO buffering to speed up transfer.
* Bandwidth rate limiting.
* Multiple backups from and to the same datasets are no problem. * Multiple backups from and to the same datasets are no problem.
* Creates the snapshot before doing anything else. (assuring you at least have a snapshot if all else fails) * Creates the snapshot before doing anything else. (assuring you at least have a snapshot if all else fails)
* Checks everything but tries continue on non-fatal errors when possible. (Reports error-count when done) * Checks everything but tries continue on non-fatal errors when possible. (Reports error-count when done)
@ -44,10 +45,11 @@ zfs-autobackup tries to be the easiest to use backup tool for zfs, with the most
* Automatic resuming of failed transfers. * Automatic resuming of failed transfers.
* Can continue from existing common snapshots. (e.g. easy migration) * Can continue from existing common snapshots. (e.g. easy migration)
* Gracefully handles datasets that no longer exist on source. * Gracefully handles datasets that no longer exist on source.
* Support for ZFS sending/receiving through custom pipes.
* Easy installation: * Easy installation:
* Just install zfs-autobackup via pip, or download it manually. * Just install zfs-autobackup via pip.
* Only needs to be installed on one side. * Only needs to be installed on one side.
* Written in python and uses zfs-commands, no 3rd party dependencies or libraries needed. * Written in python and uses zfs-commands, no special 3rd party dependencies or compiled libraries needed.
* No separate config files or properties. Just one zfs-autobackup command you can copy/paste in your backup script. * No separate config files or properties. Just one zfs-autobackup command you can copy/paste in your backup script.
## Installation ## Installation
@ -396,6 +398,47 @@ Note 2: Decide what you want at an early stage: If you change the --encrypt or -
I'll add some tips when the issues start to get in on github. :) I'll add some tips when the issues start to get in on github. :)
## Transfer buffering, compression and rate limiting.
If you're transferring over a slow link it might be useful to use `--compress=zstd-fast`. This will compress the data before sending, so it uses less bandwidth. An alternative is `--zfs-compressed`: this transfers blocks in their already-compressed on-disk form. (--compress will usually compress much better but uses much more resources. --zfs-compressed uses the least resources, but can be a disadvantage if you want to use a different compression method on the target.)
You can also limit the datarate by using the `--rate` option.
The `--buffer` option might also help since it acts as an IO buffer: zfs send can vary wildly between completely idle and huge bursts of data. When zfs send is idle, the buffer will continue transferring data over the slow link.
It's also possible to add custom send or receive pipes with `--send-pipe` and `--recv-pipe`.
These options all work together and the buffer on the receiving side is only added if appropriate. When all options are active:
#### On the sending side:
zfs send -> send buffer -> custom send pipes -> compression -> transfer rate limiter
#### On the receiving side:
decompression -> custom recv pipes -> buffer -> zfs recv
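The sending-side ordering above can be sketched in Python. This is an illustrative sketch only: the helper name, parameters, and mbuffer flags here are assumptions for the example, not zfs-autobackup's actual internals.

```python
def build_send_pipeline(buffer_size=None, send_pipes=(), compress_cmd=None, rate=None):
    """Sketch of the documented sending-side order:
    zfs send -> send buffer -> custom send pipes -> compression -> rate limiter.

    Returns a list of argv lists that zfs send output would be piped through.
    """
    pipeline = []
    if buffer_size:
        # IO buffer directly after zfs send (--buffer)
        pipeline.append(["mbuffer", "-q", "-m", buffer_size])
    for pipe in send_pipes:
        # custom --send-pipe commands, in the order given
        pipeline.append(list(pipe))
    if compress_cmd:
        # compression just before the transfer (--compress)
        pipeline.append(list(compress_cmd))
    if rate:
        # the rate limiter is the last stage on the sender (--rate)
        pipeline.append(["mbuffer", "-q", "-R", rate])
    return pipeline
```

For example, `build_send_pipeline("256M", [["gpg", "--encrypt"]], ["zstd", "-3"], "10M")` yields four stages in exactly the order shown in the diagram above; with no options set, zfs send connects directly to the transfer.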
## Running custom commands before and after snapshotting
You can run commands before and after the snapshot, for example to freeze databases and make the on-disk data consistent before snapshotting.
The commands will be executed on the source side. Use the `--pre-snapshot-cmd` and `--post-snapshot-cmd` options for this.
For example:
```sh
zfs-autobackup \
--pre-snapshot-cmd 'daemon -f jexec mysqljail1 mysql -s -e "set autocommit=0;flush logs;flush tables with read lock;\\! echo \$\$ > /tmp/mysql_lock.pid && sleep 60"' \
--pre-snapshot-cmd 'daemon -f jexec mysqljail2 mysql -s -e "set autocommit=0;flush logs;flush tables with read lock;\\! echo \$\$ > /tmp/mysql_lock.pid && sleep 60"' \
--post-snapshot-cmd 'pkill -F /jails/mysqljail1/tmp/mysql_lock.pid' \
--post-snapshot-cmd 'pkill -F /jails/mysqljail2/tmp/mysql_lock.pid' \
backupfs1
```
Failure handling during pre/post commands:
* If a pre-command fails, zfs-autobackup will exit with an error. (after executing the post-commands)
* All post-commands are always executed, even if the pre-commands or the actual snapshot failed. This way you can be sure that everything is always cleaned up and unfrozen.
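The failure handling above can be sketched with a try/finally. The helper below is hypothetical, not the actual zfs-autobackup implementation; it only assumes that each COMMAND string is parsed with `shlex.split()` (as the commit "Use shlex.split() for --pre-snapshot-cmd and --post-snapshot-cmd" indicates).

```python
import shlex
import subprocess

def snapshot_with_hooks(pre_cmds, take_snapshot, post_cmds):
    """Hypothetical sketch of the documented behavior: a failing pre-command
    aborts before the snapshot is attempted, but every post-command still runs."""
    try:
        for cmd in pre_cmds:
            # stop at the first failing pre-command (raises CalledProcessError)
            subprocess.run(shlex.split(cmd), check=True)
        take_snapshot()
    finally:
        for cmd in post_cmds:
            # run every post-command, even after a failure above
            subprocess.run(shlex.split(cmd), check=False)
```

With `pre_cmds=["true", "false", "true"]` the second pre-command fails, the snapshot is never attempted, the post-commands still run, and the error propagates to the caller.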
## Tips ## Tips
* Use ```--debug``` if something goes wrong and you want to see the commands that are executed. This will also stop at the first error. * Use ```--debug``` if something goes wrong and you want to see the commands that are executed. This will also stop at the first error.
@ -408,6 +451,8 @@ I'll add some tips when the issues start to get in on github. :)
If you have a large number of datasets it's important to keep the following tips in mind. If you have a large number of datasets it's important to keep the following tips in mind.
It might also help to use the --buffer option to add IO buffering during the data transfer. This can speed things up, since it smooths out the sudden IO bursts that are frequent during a zfs send or recv.
#### Some statistics #### Some statistics
To get some idea of how fast zfs-autobackup is, I did some test on my laptop, with a SKHynix_HFS512GD9TNI-L2B0B disk. I'm using zfs 2.0.2. To get some idea of how fast zfs-autobackup is, I did some test on my laptop, with a SKHynix_HFS512GD9TNI-L2B0B disk. I'm using zfs 2.0.2.
@ -468,17 +513,32 @@ Look in man ssh_config for many more options.
(NOTE: Quite a lot has changed since the current stable version 3.0. The page you are viewing is for upcoming version 3.1 which is still in beta.) (NOTE: Quite a lot has changed since the current stable version 3.0. The page you are viewing is for upcoming version 3.1 which is still in beta.)
```console ```console
usage: zfs-autobackup [-h] [--ssh-config CONFIG-FILE] [--ssh-source USER@HOST] [--ssh-target USER@HOST] [--keep-source SCHEDULE] [--keep-target SCHEDULE] [--other-snapshots] [--no-snapshot] [--no-send] usage: zfs-autobackup [-h] [--ssh-config CONFIG-FILE] [--ssh-source USER@HOST]
[--no-thinning] [--no-holds] [--min-change BYTES] [--allow-empty] [--ignore-replicated] [--strip-path N] [--clear-refreservation] [--clear-mountpoint] [--filter-properties PROPERY,...] [--ssh-target USER@HOST] [--keep-source SCHEDULE]
[--set-properties PROPERTY=VALUE,...] [--rollback] [--destroy-incompatible] [--destroy-missing SCHEDULE] [--ignore-transfer-errors] [--decrypt] [--encrypt] [--test] [--verbose] [--debug] [--keep-target SCHEDULE] [--pre-snapshot-cmd COMMAND]
[--debug-output] [--progress] [--send-pipe COMMAND] [--recv-pipe COMMAND] [--post-snapshot-cmd COMMAND] [--other-snapshots]
backup-name [target-path] [--no-snapshot] [--no-send] [--no-thinning] [--no-holds]
[--min-change BYTES] [--allow-empty]
[--ignore-replicated] [--strip-path N]
[--clear-refreservation] [--clear-mountpoint]
[--filter-properties PROPERY,...]
[--set-properties PROPERTY=VALUE,...] [--rollback]
[--destroy-incompatible] [--destroy-missing SCHEDULE]
[--ignore-transfer-errors] [--decrypt] [--encrypt]
[--test] [--verbose] [--debug] [--debug-output]
[--progress] [--send-pipe COMMAND] [--recv-pipe COMMAND]
[--compress TYPE] [--rate DATARATE] [--buffer SIZE]
backup-name [target-path]
zfs-autobackup v3.1-beta3 - Copyright 2020 E.H.Eefting (edwin@datux.nl) zfs-autobackup v3.1-beta6 - (c)2021 E.H.Eefting (edwin@datux.nl)
positional arguments: positional arguments:
backup-name Name of the backup (you should set the zfs property "autobackup:backup-name" to true on filesystems you want to backup backup-name Name of the backup (you should set the zfs property
target-path Target ZFS filesystem (optional: if not specified, zfs-autobackup will only operate as snapshot-tool on source) "autobackup:backup-name" to true on filesystems you
want to backup
target-path Target ZFS filesystem (optional: if not specified,
zfs-autobackup will only operate as snapshot-tool on
source)
optional arguments: optional arguments:
-h, --help show this help message and exit -h, --help show this help message and exit
@ -489,41 +549,84 @@ optional arguments:
--ssh-target USER@HOST --ssh-target USER@HOST
Target host to push backup to. Target host to push backup to.
--keep-source SCHEDULE --keep-source SCHEDULE
Thinning schedule for old source snapshots. Default: 10,1d1w,1w1m,1m1y Thinning schedule for old source snapshots. Default:
10,1d1w,1w1m,1m1y
--keep-target SCHEDULE --keep-target SCHEDULE
Thinning schedule for old target snapshots. Default: 10,1d1w,1w1m,1m1y Thinning schedule for old target snapshots. Default:
--other-snapshots Send over other snapshots as well, not just the ones created by this tool. 10,1d1w,1w1m,1m1y
--no-snapshot Don't create new snapshots (useful for finishing uncompleted backups, or cleanups) --pre-snapshot-cmd COMMAND
--no-send Don't send snapshots (useful for cleanups, or if you want a serperate send-cronjob) Run COMMAND before snapshotting (can be used multiple
times.
--post-snapshot-cmd COMMAND
Run COMMAND after snapshotting (can be used multiple
times.
--other-snapshots Send over other snapshots as well, not just the ones
created by this tool.
--no-snapshot Don't create new snapshots (useful for finishing
uncompleted backups, or cleanups)
--no-send Don't send snapshots (useful for cleanups, or if you
want a serperate send-cronjob)
--no-thinning Do not destroy any snapshots. --no-thinning Do not destroy any snapshots.
--no-holds Don't hold snapshots. (Faster. Allows you to destroy common snapshot.) --no-holds Don't hold snapshots. (Faster. Allows you to destroy
--min-change BYTES Number of bytes written after which we consider a dataset changed (default 1) common snapshot.)
--allow-empty If nothing has changed, still create empty snapshots. (same as --min-change=0) --min-change BYTES Number of bytes written after which we consider a
--ignore-replicated Ignore datasets that seem to be replicated some other way. (No changes since lastest snapshot. Useful for proxmox HA replication) dataset changed (default 1)
--strip-path N Number of directories to strip from target path (use 1 when cloning zones between 2 SmartOS machines) --allow-empty If nothing has changed, still create empty snapshots.
(same as --min-change=0)
--ignore-replicated Ignore datasets that seem to be replicated some other
way. (No changes since lastest snapshot. Useful for
proxmox HA replication)
--strip-path N Number of directories to strip from target path (use 1
when cloning zones between 2 SmartOS machines)
--clear-refreservation --clear-refreservation
Filter "refreservation" property. (recommended, safes space. same as --filter-properties refreservation) Filter "refreservation" property. (recommended, safes
--clear-mountpoint Set property canmount=noauto for new datasets. (recommended, prevents mount conflicts. same as --set-properties canmount=noauto) space. same as --filter-properties refreservation)
--clear-mountpoint Set property canmount=noauto for new datasets.
(recommended, prevents mount conflicts. same as --set-
properties canmount=noauto)
--filter-properties PROPERY,... --filter-properties PROPERY,...
List of properties to "filter" when receiving filesystems. (you can still restore them with zfs inherit -S) List of properties to "filter" when receiving
filesystems. (you can still restore them with zfs
inherit -S)
--set-properties PROPERTY=VALUE,... --set-properties PROPERTY=VALUE,...
List of propererties to override when receiving filesystems. (you can still restore them with zfs inherit -S) List of propererties to override when receiving
--rollback Rollback changes to the latest target snapshot before starting. (normally you can prevent changes by setting the readonly property on the target_path to on) filesystems. (you can still restore them with zfs
inherit -S)
--rollback Rollback changes to the latest target snapshot before
starting. (normally you can prevent changes by setting
the readonly property on the target_path to on)
--destroy-incompatible --destroy-incompatible
Destroy incompatible snapshots on target. Use with care! (implies --rollback) Destroy incompatible snapshots on target. Use with
care! (implies --rollback)
--destroy-missing SCHEDULE --destroy-missing SCHEDULE
Destroy datasets on target that are missing on the source. Specify the time since the last snapshot, e.g: --destroy-missing 30d Destroy datasets on target that are missing on the
source. Specify the time since the last snapshot, e.g:
--destroy-missing 30d
--ignore-transfer-errors --ignore-transfer-errors
Ignore transfer errors (still checks if received filesystem exists. useful for acltype errors) Ignore transfer errors (still checks if received
filesystem exists. useful for acltype errors)
--decrypt Decrypt data before sending it over. --decrypt Decrypt data before sending it over.
--encrypt Encrypt data after receiving it. --encrypt Encrypt data after receiving it.
--test dont change anything, just show what would be done (still does all read-only operations) --test dont change anything, just show what would be done
(still does all read-only operations)
--verbose verbose output --verbose verbose output
--debug Show zfs commands that are executed, stops after an exception. --debug Show zfs commands that are executed, stops after an
exception.
--debug-output Show zfs commands and their output/exit codes. (noisy) --debug-output Show zfs commands and their output/exit codes. (noisy)
--progress show zfs progress output. Enabled automaticly on ttys. (use --no-progress to disable) --progress show zfs progress output. Enabled automaticly on ttys.
--send-pipe COMMAND pipe zfs send output through COMMAND (use --no-progress to disable)
--recv-pipe COMMAND pipe zfs recv input through COMMAND --send-pipe COMMAND pipe zfs send output through COMMAND (can be used
multiple times)
--recv-pipe COMMAND pipe zfs recv input through COMMAND (can be used
multiple times)
--compress TYPE Use compression during transfer, zstd-fast
recommended. (zstd-slow, xz, pigz-fast, lz4, pigz-
slow, zstd-fast, gzip, lzo)
--rate DATARATE Limit data transfer rate (e.g. 128K. requires
mbuffer.)
--buffer SIZE Add zfs send and recv buffers to smooth out IO bursts.
(e.g. 128M. requires mbuffer)
Full manual at: https://github.com/psy0rz/zfs_autobackup Full manual at: https://github.com/psy0rz/zfs_autobackup
``` ```


@ -1,4 +1,6 @@
# To run tests as non-root, use this hack:
# chmod 4755 /usr/sbin/zpool /usr/sbin/zfs
import subprocess import subprocess
import random import random


@ -21,7 +21,7 @@ class TestCmdPipe(unittest2.TestCase):
p=CmdPipe(readonly=False, inp="test") p=CmdPipe(readonly=False, inp="test")
err=[] err=[]
out=[] out=[]
p.add(CmdItem(["echo", "test"], stderr_handler=lambda line: err.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,0))) p.add(CmdItem(["cat"], stderr_handler=lambda line: err.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,0)))
executed=p.execute(stdout_handler=lambda line: out.append(line)) executed=p.execute(stdout_handler=lambda line: out.append(line))
self.assertEqual(err, []) self.assertEqual(err, [])


@ -227,11 +227,11 @@ test_target1/test_source2/fs2/sub@test-20101111000000
# incremental, doesnt want previous anymore # incremental, doesnt want previous anymore
with patch('time.strftime', return_value="20101111000002"): with patch('time.strftime', return_value="20101111000002"):
self.assertFalse(ZfsAutobackup( self.assertFalse(ZfsAutobackup(
"test test_target1 --no-progress --verbose --keep-target=0 --debug --allow-empty".split(" ")).run()) "test test_target1 --no-progress --verbose --keep-target=0 --allow-empty".split(" ")).run())
print(buf.getvalue()) print(buf.getvalue())
self.assertIn(": aborting resume, since", buf.getvalue()) self.assertIn("Aborting resume, we dont want that snapshot anymore.", buf.getvalue())
r = shelltest("zfs list -H -o name -r -t all test_target1") r = shelltest("zfs list -H -o name -r -t all test_target1")
self.assertMultiLineEqual(r, """ self.assertMultiLineEqual(r, """
@ -247,6 +247,34 @@ test_target1/test_source2/fs2/sub
test_target1/test_source2/fs2/sub@test-20101111000002 test_target1/test_source2/fs2/sub@test-20101111000002
""") """)
# test with empty snapshot list (this was a bug)
def test_abort_resume_emptysnapshotlist(self):
if "0.6.5" in ZFS_USERSPACE:
self.skipTest("Resume not supported in this ZFS userspace version")
with patch('time.strftime', return_value="20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())
# generate resume
with patch('time.strftime', return_value="20101111000001"):
self.generate_resume()
shelltest("zfs destroy test_source1/fs1@test-20101111000001")
with OutputIO() as buf:
with redirect_stdout(buf):
# incremental, doesnt want previous anymore
with patch('time.strftime', return_value="20101111000002"):
self.assertFalse(ZfsAutobackup(
"test test_target1 --no-progress --verbose --no-snapshot".split(
" ")).run())
print(buf.getvalue())
self.assertIn("Aborting resume, its obsolete", buf.getvalue())
def test_missing_common(self): def test_missing_common(self):
with patch('time.strftime', return_value="20101111000000"): with patch('time.strftime', return_value="20101111000000"):


@ -47,3 +47,42 @@ class TestSendRecvPipes(unittest2.TestCase):
self.assertFalse(ZfsAutobackup(["test", "test_target1", "--exclude-received", "--no-holds", "--no-progress", "--compress="+compress]).run()) self.assertFalse(ZfsAutobackup(["test", "test_target1", "--exclude-received", "--no-holds", "--no-progress", "--compress="+compress]).run())
shelltest("zfs destroy -r test_target1/test_source1/fs1/sub") shelltest("zfs destroy -r test_target1/test_source1/fs1/sub")
def test_buffer(self):
"""test different buffer configurations"""
with self.subTest("local local pipe"):
with patch('time.strftime', return_value="20101111000000"):
self.assertFalse(ZfsAutobackup(["test", "test_target1", "--exclude-received", "--no-holds", "--no-progress", "--buffer=1M" ]).run())
shelltest("zfs destroy -r test_target1/test_source1/fs1/sub")
with self.subTest("remote local pipe"):
with patch('time.strftime', return_value="20101111000000"):
self.assertFalse(ZfsAutobackup(["test", "test_target1", "--exclude-received", "--no-holds", "--no-progress", "--ssh-source=localhost", "--buffer=1M"]).run())
shelltest("zfs destroy -r test_target1/test_source1/fs1/sub")
with self.subTest("local remote pipe"):
with patch('time.strftime', return_value="20101111000000"):
self.assertFalse(ZfsAutobackup(["test", "test_target1", "--exclude-received", "--no-holds", "--no-progress", "--ssh-target=localhost", "--buffer=1M"]).run())
shelltest("zfs destroy -r test_target1/test_source1/fs1/sub")
with self.subTest("remote remote pipe"):
with patch('time.strftime', return_value="20101111000000"):
self.assertFalse(ZfsAutobackup(["test", "test_target1", "--exclude-received", "--no-holds", "--no-progress", "--ssh-source=localhost", "--ssh-target=localhost", "--buffer=1M"]).run())
def test_rate(self):
"""test rate limit"""
start=time.time()
with patch('time.strftime', return_value="20101111000000"):
self.assertFalse(ZfsAutobackup(["test", "test_target1", "--exclude-received", "--no-holds", "--no-progress", "--rate=50k" ]).run())
#not a great way of verifying but it works.
self.assertGreater(time.time()-start, 5)


@ -890,7 +890,7 @@ test_target1/test_source2/fs2/sub@test-20101111000003
n=ZfsNode("test",l) n=ZfsNode("test",l)
d=ZfsDataset(n,"test_source1@test") d=ZfsDataset(n,"test_source1@test")
sp=d.send_pipe([], prev_snapshot=None, resume_token=None, show_progress=True, raw=False, send_pipes=[], send_properties=True, write_embedded=True) sp=d.send_pipe([], prev_snapshot=None, resume_token=None, show_progress=True, raw=False, send_pipes=[], send_properties=True, write_embedded=True, zfs_compressed=True)
with OutputIO() as buf: with OutputIO() as buf:


@ -71,4 +71,11 @@ test_target1/b/test_source1/fs1/sub@test-20101111000000
test_target1/b/test_source2/fs2/sub@test-20101111000000 test_target1/b/test_source2/fs2/sub@test-20101111000000
test_target1/b/test_target1/a/test_source1/fs1@test-20101111000000 test_target1/b/test_target1/a/test_source1/fs1@test-20101111000000
test_target1/b/test_target1/a/test_source1/fs1/sub@test-20101111000000 test_target1/b/test_target1/a/test_source1/fs1/sub@test-20101111000000
""") """)
def test_zfs_compressed(self):
with patch('time.strftime', return_value="20101111000000"):
self.assertFalse(
ZfsAutobackup("test test_target1 --no-progress --verbose --debug --zfs-compressed".split(" ")).run())


@ -1,6 +1,6 @@
from basetest import * from basetest import *
from zfs_autobackup.LogStub import LogStub from zfs_autobackup.LogStub import LogStub
from zfs_autobackup.ExecuteNode import ExecuteError
class TestZfsNode(unittest2.TestCase): class TestZfsNode(unittest2.TestCase):
@ -9,16 +9,15 @@ class TestZfsNode(unittest2.TestCase):
prepare_zpools() prepare_zpools()
# return super().setUp() # return super().setUp()
def test_consistent_snapshot(self): def test_consistent_snapshot(self):
logger=LogStub() logger = LogStub()
description="[Source]" description = "[Source]"
node=ZfsNode("test", logger, description=description) node = ZfsNode("test", logger, description=description)
with self.subTest("first snapshot"): with self.subTest("first snapshot"):
node.consistent_snapshot(node.selected_datasets(exclude_paths=[], exclude_received=False), "test-1",100000) node.consistent_snapshot(node.selected_datasets(exclude_paths=[], exclude_received=False), "test-1", 100000)
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS) r = shelltest("zfs list -H -o name -r -t all " + TEST_POOLS)
self.assertEqual(r,""" self.assertEqual(r, """
test_source1 test_source1
test_source1/fs1 test_source1/fs1
test_source1/fs1@test-1 test_source1/fs1@test-1
@ -33,11 +32,10 @@ test_source2/fs3/sub
test_target1 test_target1
""") """)
with self.subTest("second snapshot, no changes, no snapshot"): with self.subTest("second snapshot, no changes, no snapshot"):
node.consistent_snapshot(node.selected_datasets(exclude_paths=[], exclude_received=False), "test-2",1) node.consistent_snapshot(node.selected_datasets(exclude_paths=[], exclude_received=False), "test-2", 1)
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS) r = shelltest("zfs list -H -o name -r -t all " + TEST_POOLS)
self.assertEqual(r,""" self.assertEqual(r, """
test_source1 test_source1
test_source1/fs1 test_source1/fs1
test_source1/fs1@test-1 test_source1/fs1@test-1
@ -53,9 +51,9 @@ test_target1
""") """)
with self.subTest("second snapshot, no changes, empty snapshot"): with self.subTest("second snapshot, no changes, empty snapshot"):
node.consistent_snapshot(node.selected_datasets(exclude_paths=[], exclude_received=False), "test-2",0) node.consistent_snapshot(node.selected_datasets(exclude_paths=[], exclude_received=False), "test-2", 0)
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS) r = shelltest("zfs list -H -o name -r -t all " + TEST_POOLS)
self.assertEqual(r,""" self.assertEqual(r, """
test_source1 test_source1
test_source1/fs1 test_source1/fs1
test_source1/fs1@test-1 test_source1/fs1@test-1
@ -73,31 +71,82 @@ test_source2/fs3/sub
test_target1 test_target1
""") """)
def test_consistent_snapshot_prepostcmds(self):
logger = LogStub()
description = "[Source]"
node = ZfsNode("test", logger, description=description, debug_output=True)
with self.subTest("Test if all cmds are executed correctly (no failures)"):
with OutputIO() as buf:
with redirect_stdout(buf):
node.consistent_snapshot(node.selected_datasets(exclude_paths=[], exclude_received=False), "test-1",
0,
pre_snapshot_cmds=["echo pre1", "echo pre2"],
post_snapshot_cmds=["echo post1 >&2", "echo post2 >&2"]
)
self.assertIn("STDOUT > pre1", buf.getvalue())
self.assertIn("STDOUT > pre2", buf.getvalue())
self.assertIn("STDOUT > post1", buf.getvalue())
self.assertIn("STDOUT > post2", buf.getvalue())
with self.subTest("Failure in the middle, only pre1 and both post1 and post2 should be executed, no snapshot should be attempted"):
with OutputIO() as buf:
with redirect_stdout(buf):
with self.assertRaises(ExecuteError):
node.consistent_snapshot(node.selected_datasets(exclude_paths=[], exclude_received=False), "test-1",
0,
pre_snapshot_cmds=["echo pre1", "false", "echo pre2"],
post_snapshot_cmds=["echo post1", "false", "echo post2"]
)
print(buf.getvalue())
self.assertIn("STDOUT > pre1", buf.getvalue())
self.assertNotIn("STDOUT > pre2", buf.getvalue())
self.assertIn("STDOUT > post1", buf.getvalue())
self.assertIn("STDOUT > post2", buf.getvalue())
with self.subTest("Snapshot fails"):
with OutputIO() as buf:
with redirect_stdout(buf):
with self.assertRaises(ExecuteError):
#same snapshot name as before so it fails
node.consistent_snapshot(node.selected_datasets(exclude_paths=[], exclude_received=False), "test-1",
0,
pre_snapshot_cmds=["echo pre1", "echo pre2"],
post_snapshot_cmds=["echo post1", "echo post2"]
)
print(buf.getvalue())
self.assertIn("STDOUT > pre1", buf.getvalue())
self.assertIn("STDOUT > pre2", buf.getvalue())
self.assertIn("STDOUT > post1", buf.getvalue())
self.assertIn("STDOUT > post2", buf.getvalue())
def test_getselected(self):
logger = LogStub()
description = "[Source]"
node = ZfsNode("test", logger, description=description)
s = pformat(node.selected_datasets(exclude_paths=[], exclude_received=False))
print(s)
# basics
self.assertEqual(s, """[(local): test_source1/fs1,
(local): test_source1/fs1/sub,
(local): test_source2/fs2/sub]""")
# caching, so expect same result after changing it
subprocess.check_call("zfs set autobackup:test=true test_source2/fs3", shell=True)
self.assertEqual(s, """[(local): test_source1/fs1,
(local): test_source1/fs1/sub,
(local): test_source2/fs2/sub]""")
def test_validcommand(self):
logger = LogStub()
description = "[Source]"
node = ZfsNode("test", logger, description=description)
with self.subTest("test invalid option"):
self.assertFalse(node.valid_command(["zfs", "send", "--invalid-option", "nonexisting"]))
@@ -105,21 +154,19 @@ test_target1
self.assertTrue(node.valid_command(["zfs", "send", "-v", "nonexisting"]))
def test_supportedsendoptions(self):
logger = LogStub()
description = "[Source]"
node = ZfsNode("test", logger, description=description)
# -D probably always supported
self.assertGreater(len(node.supported_send_options), 0)
def test_supportedrecvoptions(self):
logger = LogStub()
description = "[Source]"
# NOTE: this could hang via ssh if we don't close filehandles properly (which was a previous bug)
node = ZfsNode("test", logger, description=description, ssh_to='localhost')
self.assertIsInstance(node.supported_recv_options, list)
if __name__ == '__main__':
unittest.main()


@@ -16,7 +16,7 @@ from zfs_autobackup.ThinnerRule import ThinnerRule
class ZfsAutobackup:
"""main class"""
VERSION = "3.1-rc4"
HEADER = "zfs-autobackup v{} - (c)2021 E.H.Eefting (edwin@datux.nl)".format(VERSION)
def __init__(self, argv, print_arguments=True):
@@ -45,6 +45,10 @@ class ZfsAutobackup:
help='Target ZFS filesystem (optional: if not specified, zfs-autobackup will only operate '
'as snapshot-tool on source)')
parser.add_argument('--pre-snapshot-cmd', metavar="COMMAND", default=[], action='append',
help='Run COMMAND before snapshotting (can be used multiple times).')
parser.add_argument('--post-snapshot-cmd', metavar="COMMAND", default=[], action='append',
help='Run COMMAND after snapshotting (can be used multiple times).')
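Because both options use `action='append'`, every repetition on the command line adds another command string to a list, in order. A minimal sketch of that behavior (the MySQL lock commands are just illustrative values):

```python
import argparse

# Same option definitions as in the diff above.
parser = argparse.ArgumentParser(prog="zfs-autobackup")
parser.add_argument('--pre-snapshot-cmd', metavar="COMMAND", default=[], action='append')
parser.add_argument('--post-snapshot-cmd', metavar="COMMAND", default=[], action='append')

args = parser.parse_args([
    '--pre-snapshot-cmd', 'mysql -e "FLUSH TABLES WITH READ LOCK"',
    '--pre-snapshot-cmd', 'sync',
    '--post-snapshot-cmd', 'mysql -e "UNLOCK TABLES"',
])

# Each repeated option lands in order in a plain list:
assert args.pre_snapshot_cmd == ['mysql -e "FLUSH TABLES WITH READ LOCK"', 'sync']
assert args.post_snapshot_cmd == ['mysql -e "UNLOCK TABLES"']
```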
parser.add_argument('--other-snapshots', action='store_true',
help='Send over other snapshots as well, not just the ones created by this tool.')
parser.add_argument('--no-snapshot', action='store_true',
@@ -75,7 +79,7 @@ class ZfsAutobackup:
parser.add_argument('--clear-mountpoint', action='store_true',
help='Set property canmount=noauto for new datasets. (recommended, prevents mount '
'conflicts. same as --set-properties canmount=noauto)')
parser.add_argument('--filter-properties', metavar='PROPERTY,...', type=str,
help='List of properties to "filter" when receiving filesystems. (you can still restore '
'them with zfs inherit -S)')
parser.add_argument('--set-properties', metavar='PROPERTY=VALUE,...', type=str,
@@ -99,6 +103,9 @@ class ZfsAutobackup:
parser.add_argument('--encrypt', action='store_true',
help='Encrypt data after receiving it.')
parser.add_argument('--zfs-compressed', action='store_true',
help='Transfer blocks that already have zfs-compression as-is.')
parser.add_argument('--test', action='store_true',
help='dont change anything, just show what would be done (still does all read-only '
'operations)')
@@ -163,9 +170,12 @@ class ZfsAutobackup:
self.log.error("Target should not start with a /")
sys.exit(255)
if args.compress and args.ssh_source is None and args.ssh_target is None:
self.warning("Using compression, but transfer is local.")
if args.compress and args.zfs_compressed:
self.warning("Using --compress with --zfs-compressed, might be inefficient.")
def verbose(self, txt):
self.log.verbose(txt)
@@ -323,6 +333,13 @@ class ZfsAutobackup:
ret.append(ExecuteNode.PIPE)
logger("zfs recv custom pipe : {}".format(recv_pipe))
# IO buffer
if self.args.buffer:
# only add a second buffer if it's useful (e.g. non-local transfer or other pipes active)
if self.args.ssh_source is not None or self.args.ssh_target is not None or self.args.recv_pipe or self.args.send_pipe or self.args.compress is not None:
logger("zfs recv buffer : {}".format(self.args.buffer))
ret.extend(["mbuffer", "-q", "-s128k", "-m" + self.args.buffer, ExecuteNode.PIPE])
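The guard above only inserts an mbuffer stage when another pipeline stage (ssh, a custom pipe, or compression) already exists, since buffering a purely local, unpiped transfer adds no throughput. A hypothetical standalone restatement of that decision (`buffer_pipe_args` and the `PIPE` stand-in are illustrative names, not part of the codebase):

```python
def buffer_pipe_args(buffer_size, ssh_source=None, ssh_target=None,
                     recv_pipe=None, send_pipe=None, compress=None):
    """Return the extra mbuffer pipe arguments, or [] when a local,
    unpiped transfer makes a second buffer pointless."""
    PIPE = object()  # stand-in for ExecuteNode.PIPE
    if (ssh_source is not None or ssh_target is not None
            or recv_pipe or send_pipe or compress is not None):
        return ["mbuffer", "-q", "-s128k", "-m" + buffer_size, PIPE]
    return []

# Local transfer with no other pipes: no buffer is added.
assert buffer_pipe_args("256M") == []
# Remote transfer: the buffer stage is inserted.
assert buffer_pipe_args("256M", ssh_target="backupserver")[0] == "mbuffer"
```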
return ret
# NOTE: this method also uses self.args. args that need extra processing are passed as function parameters:
@@ -374,7 +391,7 @@ class ZfsAutobackup:
no_send=self.args.no_send,
destroy_incompatible=self.args.destroy_incompatible,
send_pipes=send_pipes, recv_pipes=recv_pipes,
decrypt=self.args.decrypt, encrypt=self.args.encrypt, zfs_compressed=self.args.zfs_compressed)
except Exception as e:
fail_count = fail_count + 1
source_dataset.error("FAILED: " + str(e))
@@ -457,7 +474,7 @@ class ZfsAutobackup:
ssh_to=self.args.ssh_source, readonly=self.args.test,
debug_output=self.args.debug_output, description=description, thinner=source_thinner)
source_node.verbose(
"Selects all datasets that have property 'autobackup:{}=true' (or children of datasets that have "
"'autobackup:{}=child')".format(
self.args.backup_name, self.args.backup_name))
@@ -493,7 +510,9 @@ class ZfsAutobackup:
if not self.args.no_snapshot:
self.set_title("Snapshotting")
source_node.consistent_snapshot(source_datasets, source_node.new_snapshotname(),
min_changed_bytes=self.args.min_change,
pre_snapshot_cmds=self.args.pre_snapshot_cmd,
post_snapshot_cmds=self.args.post_snapshot_cmd)
################# sync
# if target is specified, we sync the datasets, otherwise we just thin the source. (e.g. snapshot mode)


@@ -503,7 +503,7 @@ class ZfsDataset:
return self.from_names(names[1:])
def send_pipe(self, features, prev_snapshot, resume_token, show_progress, raw, send_properties, write_embedded, send_pipes, zfs_compressed):
"""returns a pipe with zfs send output for this snapshot
resume_token: resume sending from this token. (in that case we don't
@@ -530,7 +530,7 @@ class ZfsDataset:
if write_embedded and 'embedded_data' in features and "-e" in self.zfs_node.supported_send_options:
cmd.append("--embed")  # WRITE_EMBEDDED, more compact stream
if zfs_compressed and "-c" in self.zfs_node.supported_send_options:
cmd.append("--compressed")  # use compressed WRITE records
# raw? (send over encrypted data in its original encrypted form without decrypting)
@@ -634,7 +634,7 @@ class ZfsDataset:
def transfer_snapshot(self, target_snapshot, features, prev_snapshot, show_progress,
filter_properties, set_properties, ignore_recv_exit_code, resume_token,
raw, send_properties, write_embedded, send_pipes, recv_pipes, zfs_compressed):
"""transfer this snapshot to target_snapshot. specify prev_snapshot for
incremental transfer
@@ -673,12 +673,13 @@ class ZfsDataset:
# do it
pipe = self.send_pipe(features=features, show_progress=show_progress, prev_snapshot=prev_snapshot,
resume_token=resume_token, raw=raw, send_properties=send_properties, write_embedded=write_embedded, send_pipes=send_pipes, zfs_compressed=zfs_compressed)
target_snapshot.recv_pipe(pipe, features=features, filter_properties=filter_properties,
set_properties=set_properties, ignore_exit_code=ignore_recv_exit_code, recv_pipes=recv_pipes)
def abort_resume(self):
"""abort current resume state"""
self.debug("Aborting resume")
self.zfs_node.run(["zfs", "recv", "-A", self.name])
def rollback(self):
@@ -901,14 +902,18 @@ class ZfsDataset:
"""
if 'receive_resume_token' in target_dataset.properties:
    if start_snapshot is None:
        target_dataset.verbose("Aborting resume, it's obsolete.")
        target_dataset.abort_resume()
    else:
        resume_token = target_dataset.properties['receive_resume_token']
        # not valid anymore?
        resume_snapshot = self.get_resume_snapshot(resume_token)
        if not resume_snapshot or start_snapshot.snapshot_name != resume_snapshot.snapshot_name:
            target_dataset.verbose("Aborting resume, it's no longer valid.")
            target_dataset.abort_resume()
        else:
            return resume_token
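The rewritten hunk above boils down to a three-way decision about an existing `receive_resume_token`. A hypothetical pure-function restatement (`resume_decision` and its parameter names are illustrative, not part of the codebase):

```python
def resume_decision(resume_token, start_snapshot_name, resume_snapshot_name):
    """Decide what to do with an existing receive_resume_token.
    Returns 'abort' when the token is obsolete (nothing left to send)
    or stale (points at a different snapshot); otherwise returns the
    token itself so the transfer can resume."""
    if start_snapshot_name is None:
        return 'abort'  # nothing left to send: the token is obsolete
    if resume_snapshot_name != start_snapshot_name:
        return 'abort'  # the token points at a different (or missing) snapshot
    return resume_token

assert resume_decision("tok", None, None) == 'abort'
assert resume_decision("tok", "test-1", "test-2") == 'abort'
assert resume_decision("tok", "test-1", "test-1") == "tok"
```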
def _plan_sync(self, target_dataset, also_other_snapshots):
"""plan where to start syncing and what to sync and what to keep
@@ -963,7 +968,7 @@ class ZfsDataset:
def sync_snapshots(self, target_dataset, features, show_progress, filter_properties, set_properties,
ignore_recv_exit_code, holds, rollback, decrypt, encrypt, also_other_snapshots,
no_send, destroy_incompatible, send_pipes, recv_pipes, zfs_compressed):
"""sync this dataset's snapshots to target_dataset, while also thinning
out old snapshots along the way.
@@ -1046,7 +1051,7 @@ class ZfsDataset:
filter_properties=active_filter_properties,
set_properties=active_set_properties,
ignore_recv_exit_code=ignore_recv_exit_code,
resume_token=resume_token, write_embedded=write_embedded, raw=raw, send_properties=send_properties, send_pipes=send_pipes, recv_pipes=recv_pipes, zfs_compressed=zfs_compressed)
resume_token = None
@@ -1075,7 +1080,7 @@ class ZfsDataset:
source_snapshot.debug("skipped (target doesn't need it)")
# was it actually a resume?
if resume_token:
target_dataset.verbose("Aborting resume, we don't want that snapshot anymore.")
target_dataset.abort_resume()
resume_token = None


@@ -1,6 +1,7 @@
# python 2 compatibility
from __future__ import print_function
import re
import shlex
import subprocess
import sys
import time
@@ -161,7 +162,7 @@ class ZfsNode(ExecuteNode):
"""determine uniq new snapshotname"""
return self.backup_name + "-" + time.strftime("%Y%m%d%H%M%S")
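The timestamp suffix makes every snapshot name lexically sortable and unique to one-second resolution. A quick check of the resulting format (`backup_name = "test"` is just an example value):

```python
import re
import time

backup_name = "test"  # hypothetical backup name
snapshot_name = backup_name + "-" + time.strftime("%Y%m%d%H%M%S")

# e.g. "test-20210622124352": backup name, a dash, then a 14-digit timestamp
assert re.fullmatch(r"test-\d{14}", snapshot_name)
```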
def consistent_snapshot(self, datasets, snapshot_name, min_changed_bytes, pre_snapshot_cmds=[], post_snapshot_cmds=[]):
"""create a consistent (atomic) snapshot of specified datasets, per pool.
"""
@@ -191,17 +192,32 @@ class ZfsNode(ExecuteNode):
self.verbose("No changes anywhere: not creating snapshots.")
return
try:
    for cmd in pre_snapshot_cmds:
        self.verbose("Running pre-snapshot-cmd")
        self.run(cmd=shlex.split(cmd), readonly=False)

    # create consistent snapshot per pool
    for (pool_name, snapshots) in pools.items():
        cmd = ["zfs", "snapshot"]
        cmd.extend(map(lambda snapshot_: str(snapshot_), snapshots))
        self.verbose("Creating snapshots {} in pool {}".format(snapshot_name, pool_name))
        self.run(cmd, readonly=False)
finally:
    for cmd in post_snapshot_cmds:
        self.verbose("Running post-snapshot-cmd")
        try:
            self.run(cmd=shlex.split(cmd), readonly=False)
        except Exception:
            # keep running the remaining post-snapshot-cmds even if one fails
            pass
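`shlex.split()` tokenizes each command string with shell-like quoting rules, so a quoted argument survives as a single argv element where a plain `str.split()` would shred it. A quick illustration:

```python
import shlex

# A pre-snapshot command with a quoted argument, as it might be
# passed on the zfs-autobackup command line:
cmd = 'mysql -e "FLUSH TABLES WITH READ LOCK"'

# shlex keeps the quoted SQL as one argv element, unlike str.split():
assert shlex.split(cmd) == ['mysql', '-e', 'FLUSH TABLES WITH READ LOCK']
assert cmd.split() != shlex.split(cmd)
```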
def selected_datasets(self, exclude_received, exclude_paths):
"""determine filesystems that should be backed up by looking at the special autobackup-property, systemwide
returns: list of ZfsDataset
"""