Compare commits

...

25 Commits

SHA1 Message Date
ce987328d9 update doc 2021-08-18 10:32:41 +02:00
9a902f0f38 final 3.1 release 2021-08-18 10:28:10 +02:00
ee2c074539 it seems the amount of changed bytes has become bigger on my prox. (approx 230k) 2021-07-07 13:20:04 +02:00
77f1c16414 fix #84 2021-07-03 14:31:34 +02:00
c5363a1538 pre/post cmd tests 2021-06-22 12:43:52 +02:00
119225ba5b bump 2021-06-18 17:27:41 +02:00
84437ee1d0 pre/post snapshot polishing. still needs test-scripts 2021-06-18 17:16:18 +02:00
1286bfafd0 Merge pull request #80 from tuffnatty/pre-post-snapshot-cmd
Add support for pre- and post-snapshot scripts (#39)
2021-06-17 11:48:09 +02:00
9fc2703638 Always call post-snapshot-cmd to clean up faster on snapshot failure 2021-06-17 12:38:16 +03:00
01dc65af96 Merge pull request #81 from tuffnatty/another-typo
Another typo fix
2021-06-17 11:21:08 +02:00
082153e0ce A more robust MySQL example 2021-06-17 09:52:18 +03:00
77f5474447 Fix tests. 2021-06-16 23:37:38 +03:00
55ff14f1d8 Allow multiple --pre/post-snapshot-cmd options. Add a usage example. 2021-06-16 23:19:29 +03:00
2acd26b304 Fix a typo 2021-06-16 19:52:06 +03:00
ec9459c1d2 Use shlex.split() for --pre-snapshot-cmd and --post-snapshot-cmd 2021-06-16 19:35:41 +03:00
233fd83ded Merge pull request #79 from tuffnatty/typos
Fix a couple of typos
2021-06-16 16:46:10 +02:00
37c24e092c Merge pull request #78 from tuffnatty/patch-1
Typo fix in argument name in warning message
2021-06-16 16:45:42 +02:00
b2bf11382c Add --pre-snapshot-cmd and --post-snapshot-cmd options 2021-06-16 16:12:20 +03:00
19b918044e Fix a couple of typos 2021-06-16 13:19:47 +03:00
67d9240e7b Typo fix in argument name in warning message 2021-06-16 00:32:31 +03:00
1a5e4a9cdd doc 2021-05-31 22:33:44 +02:00
31f8c359ff update version number 2021-05-31 22:19:28 +02:00
b50b7b7563 test 2021-05-31 22:10:56 +02:00
37f91e1e08 no longer use zfs send --compressed as default. uses --zfs-compressed to reenable it. fixes #77 . 2021-05-31 22:02:31 +02:00
a2f3aee5b1 test and fix resume edge-case 2021-05-26 23:06:19 +02:00
9 changed files with 290 additions and 107 deletions

View File

@ -400,7 +400,7 @@ I'll add some tips when the issues start to get in on github. :)
## Transfer buffering, compression and rate limiting.
If you're transferring over a slow link it might be useful to use `--compress=zstd-fast`. This will compress the data before sending, so it uses less bandwidth. An alternative is `--zfs-compressed`: this transfers blocks that already have ZFS compression intact, as-is. (`--compress` usually compresses much better but uses far more resources; `--zfs-compressed` uses the least resources, but can be a disadvantage if you want to use a different compression method on the target.)
You can also limit the data rate by using the `--rate` option.
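For illustration, the sketch below drives these transfer options through the Python entry point that the test suite in this diff also uses (`ZfsAutobackup(...).run()`). The host, pool and dataset names are made up, and the import path is assumed from the package layout:
```python
# Minimal sketch (hypothetical names): combine stream compression, a transfer
# rate limit and send/recv buffering, equivalent to the CLI call
#   zfs-autobackup --ssh-target root@backuphost --compress=zstd-fast --rate 1M --buffer 128M offsite1 backuppool/host1
from zfs_autobackup.ZfsAutobackup import ZfsAutobackup  # assumed import path

args = ("--ssh-target root@backuphost "  # push to a remote target over ssh
        "--compress=zstd-fast "          # compress the stream before sending
        "--rate 1M "                     # limit the transfer rate (requires mbuffer)
        "--buffer 128M "                 # add send/recv buffers (requires mbuffer)
        "--verbose "
        "offsite1 backuppool/host1")     # backup-name and target-path

# the tests in this diff assert that run() returns a falsy value on success
failed = ZfsAutobackup(args.split(" ")).run()
```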
@ -417,6 +417,28 @@ zfs send -> send buffer -> custom send pipes -> compression -> transfer rate lim
#### On the receiving side:
decompression -> custom recv pipes -> buffer -> zfs recv
## Running custom commands before and after snapshotting
You can run commands before and after the snapshot, for example to freeze databases so that the on-disk data is consistent before snapshotting.
The commands will be executed on the source side. Use the `--pre-snapshot-cmd` and `--post-snapshot-cmd` options for this.
For example:
```sh
zfs-autobackup \
--pre-snapshot-cmd 'daemon -f jexec mysqljail1 mysql -s -e "set autocommit=0;flush logs;flush tables with read lock;\\! echo \$\$ > /tmp/mysql_lock.pid && sleep 60"' \
--pre-snapshot-cmd 'daemon -f jexec mysqljail2 mysql -s -e "set autocommit=0;flush logs;flush tables with read lock;\\! echo \$\$ > /tmp/mysql_lock.pid && sleep 60"' \
--post-snapshot-cmd 'pkill -F /jails/mysqljail1/tmp/mysql_lock.pid' \
--post-snapshot-cmd 'pkill -F /jails/mysqljail2/tmp/mysql_lock.pid' \
backupfs1
```
Failure handling during pre/post commands:
* If a pre-command fails, zfs-autobackup exits with an error (after executing the post-commands).
* All post-commands are always executed, even if the pre-commands or the actual snapshot have failed. This way you can be sure that everything is always cleaned up and unfrozen. (See the sketch below.)
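The behaviour above amounts to wrapping the snapshot step in a try/finally block. Below is a minimal, self-contained sketch of that flow; it is not the actual zfs-autobackup implementation, and the freeze/thaw script paths are made up:
```python
# Simplified sketch of the failure-handling rules above (not the real code):
# post-commands always run, a failing pre-command skips the snapshot, and the
# original error still propagates afterwards.
import shlex
import subprocess

def snapshot_with_hooks(pre_cmds, post_cmds, take_snapshot):
    try:
        for cmd in pre_cmds:
            # a failing pre-command raises here, so the snapshot is never attempted
            subprocess.run(shlex.split(cmd), check=True)
        take_snapshot()
    finally:
        # post-commands always run, even when a pre-command or the snapshot failed
        for cmd in post_cmds:
            try:
                subprocess.run(shlex.split(cmd), check=True)
            except Exception:
                pass  # a failing post-command does not mask the original error

# hypothetical freeze/thaw helper scripts
snapshot_with_hooks(
    pre_cmds=["/usr/local/bin/freeze-db.sh"],
    post_cmds=["/usr/local/bin/thaw-db.sh"],
    take_snapshot=lambda: subprocess.run(["zfs", "snapshot", "rpool/data@example"], check=True),
)
```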
## Tips
* Use ```--debug``` if something goes wrong and you want to see the commands that are executed. This will also stop at the first error.
@ -488,26 +510,25 @@ Look in man ssh_config for many more options.
## Usage
(NOTE: Quite a lot has changed since the current stable version 3.0. The page you are viewing is for upcoming version 3.1 which is still in beta.)
```console
usage: zfs-autobackup [-h] [--ssh-config CONFIG-FILE] [--ssh-source USER@HOST]
[--ssh-target USER@HOST] [--keep-source SCHEDULE]
[--keep-target SCHEDULE] [--pre-snapshot-cmd COMMAND]
[--post-snapshot-cmd COMMAND] [--other-snapshots]
[--no-snapshot] [--no-send] [--no-thinning] [--no-holds]
[--min-change BYTES] [--allow-empty] [--ignore-replicated]
[--strip-path N] [--clear-refreservation]
[--clear-mountpoint] [--filter-properties PROPERTY,...]
[--set-properties PROPERTY=VALUE,...] [--rollback]
[--destroy-incompatible] [--destroy-missing SCHEDULE]
[--ignore-transfer-errors] [--decrypt] [--encrypt]
[--zfs-compressed] [--test] [--verbose] [--debug]
[--debug-output] [--progress] [--send-pipe COMMAND]
[--recv-pipe COMMAND] [--compress TYPE] [--rate DATARATE]
[--buffer SIZE]
backup-name [target-path]
zfs-autobackup v3.1 - (c)2021 E.H.Eefting (edwin@datux.nl)
positional arguments:
backup-name Name of the backup (you should set the zfs property
@ -531,6 +552,12 @@ optional arguments:
--keep-target SCHEDULE
Thinning schedule for old target snapshots. Default:
10,1d1w,1w1m,1m1y
--pre-snapshot-cmd COMMAND
Run COMMAND before snapshotting (can be used multiple
times.)
--post-snapshot-cmd COMMAND
Run COMMAND after snapshotting (can be used multiple
times.)
--other-snapshots Send over other snapshots as well, not just the ones
created by this tool.
--no-snapshot Don't create new snapshots (useful for finishing
@ -547,7 +574,6 @@ optional arguments:
--ignore-replicated Ignore datasets that seem to be replicated some other
way. (No changes since latest snapshot. Useful for
proxmox HA replication)
--strip-path N Number of directories to strip from target path (use 1
when cloning zones between 2 SmartOS machines)
--clear-refreservation
@ -556,7 +582,7 @@ optional arguments:
--clear-mountpoint Set property canmount=noauto for new datasets.
(recommended, prevents mount conflicts. same as --set-
properties canmount=noauto)
--filter-properties PROPERTY,...
List of properties to "filter" when receiving
filesystems. (you can still restore them with zfs
inherit -S)
@ -579,6 +605,8 @@ optional arguments:
filesystem exists. useful for acltype errors)
--decrypt Decrypt data before sending it over.
--encrypt Encrypt data after receiving it.
--zfs-compressed Transfer blocks that already have zfs-compression as-
is.
--test dont change anything, just show what would be done
(still does all read-only operations)
--verbose verbose output
@ -592,14 +620,15 @@ optional arguments:
--recv-pipe COMMAND pipe zfs recv input through COMMAND (can be used
multiple times)
--compress TYPE Use compression during transfer, zstd-fast
recommended. (xz, pigz-slow, zstd-slow, zstd-fast,
lzo, gzip, pigz-fast, lz4)
--rate DATARATE Limit data transfer rate (e.g. 128K. requires
mbuffer.)
--buffer SIZE Add zfs send and recv buffers to smooth out IO bursts.
(e.g. 128M. requires mbuffer)
Full manual at: https://github.com/psy0rz/zfs_autobackup
```
## Troubleshooting
@ -725,7 +754,7 @@ for HOST in $HOSTS; do
ssh $HOST "zfs set autobackup:data_$NAME=child rpool/data" ssh $HOST "zfs set autobackup:data_$NAME=child rpool/data"
#backup data filesystems to a common directory #backup data filesystems to a common directory
zfs-autobackup --keep-source=1d1w,1w1m --ssh-source $HOST data_$NAME $TARGET/data --clear-mountpoint --clear-refreservation --ignore-transfer-errors --strip-path 2 --verbose --ignore-replicated --min-change 200000 --no-holds $@ zfs-autobackup --keep-source=1d1w,1w1m --ssh-source $HOST data_$NAME $TARGET/data --clear-mountpoint --clear-refreservation --ignore-transfer-errors --strip-path 2 --verbose --ignore-replicated --min-change 300000 --no-holds $@
zabbix-job-status backup_$HOST""_data_$NAME daily $? >/dev/null 2>/dev/null zabbix-job-status backup_$HOST""_data_$NAME daily $? >/dev/null 2>/dev/null

View File

@ -227,11 +227,11 @@ test_target1/test_source2/fs2/sub@test-20101111000000
# incremental, doesnt want previous anymore
with patch('time.strftime', return_value="20101111000002"):
self.assertFalse(ZfsAutobackup(
"test test_target1 --no-progress --verbose --keep-target=0 --allow-empty".split(" ")).run())
print(buf.getvalue())
self.assertIn("Aborting resume, we dont want that snapshot anymore.", buf.getvalue())
r = shelltest("zfs list -H -o name -r -t all test_target1")
self.assertMultiLineEqual(r, """
@ -247,6 +247,34 @@ test_target1/test_source2/fs2/sub
test_target1/test_source2/fs2/sub@test-20101111000002
""")
# test with empty snapshot list (this was a bug)
def test_abort_resume_emptysnapshotlist(self):
if "0.6.5" in ZFS_USERSPACE:
self.skipTest("Resume not supported in this ZFS userspace version")
with patch('time.strftime', return_value="20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())
# generate resume
with patch('time.strftime', return_value="20101111000001"):
self.generate_resume()
shelltest("zfs destroy test_source1/fs1@test-20101111000001")
with OutputIO() as buf:
with redirect_stdout(buf):
# incremental, doesnt want previous anymore
with patch('time.strftime', return_value="20101111000002"):
self.assertFalse(ZfsAutobackup(
"test test_target1 --no-progress --verbose --no-snapshot".split(
" ")).run())
print(buf.getvalue())
self.assertIn("Aborting resume, its obsolete", buf.getvalue())
def test_missing_common(self):
with patch('time.strftime', return_value="20101111000000"):

View File

@ -8,12 +8,51 @@ class TestZfsNode(unittest2.TestCase):
prepare_zpools()
self.longMessage=True
def test_keepsource0target10queuedsend(self):
"""Test if thinner doesnt destroy too much early on if there are no common snapshots YET. Issue #84"""
with patch('time.strftime', return_value="20101111000000"):
self.assertFalse(ZfsAutobackup(
"test test_target1 --no-progress --verbose --keep-source=0 --keep-target=10 --allow-empty --no-send".split(
" ")).run())
with patch('time.strftime', return_value="20101111000001"):
self.assertFalse(ZfsAutobackup(
"test test_target1 --no-progress --verbose --keep-source=0 --keep-target=10 --allow-empty --no-send".split(
" ")).run())
with patch('time.strftime', return_value="20101111000002"):
self.assertFalse(ZfsAutobackup(
"test test_target1 --no-progress --verbose --keep-source=0 --keep-target=10 --allow-empty".split(
" ")).run())
r = shelltest("zfs list -H -o name -r -t all " + TEST_POOLS)
self.assertMultiLineEqual(r, """
test_source1
test_source1/fs1
test_source1/fs1@test-20101111000002
test_source1/fs1/sub
test_source1/fs1/sub@test-20101111000002
test_source2
test_source2/fs2
test_source2/fs2/sub
test_source2/fs2/sub@test-20101111000002
test_source2/fs3
test_source2/fs3/sub
test_target1
test_target1/test_source1
test_target1/test_source1/fs1
test_target1/test_source1/fs1@test-20101111000000
test_target1/test_source1/fs1@test-20101111000001
test_target1/test_source1/fs1@test-20101111000002
test_target1/test_source1/fs1/sub
test_target1/test_source1/fs1/sub@test-20101111000000
test_target1/test_source1/fs1/sub@test-20101111000001
test_target1/test_source1/fs1/sub@test-20101111000002
test_target1/test_source2
test_target1/test_source2/fs2
test_target1/test_source2/fs2/sub
test_target1/test_source2/fs2/sub@test-20101111000000
test_target1/test_source2/fs2/sub@test-20101111000001
test_target1/test_source2/fs2/sub@test-20101111000002
""")

View File

@ -312,8 +312,6 @@ test_target1/test_source2/fs2
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-send --no-progress".split(" ")).run()) self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-send --no-progress".split(" ")).run())
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS) r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
#(only parents are created )
#TODO: it probably shouldn't create these
self.assertMultiLineEqual(r,"""
test_source1
test_source1/fs1
@ -337,8 +335,6 @@ test_target1
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --ignore-replicated".split(" ")).run()) self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --ignore-replicated".split(" ")).run())
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS) r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
#(only parents are created )
#TODO: it probably shouldn't create these
self.assertMultiLineEqual(r,"""
test_source1
test_source1/fs1
@ -851,7 +847,7 @@ test_target1/test_source2/fs2/sub@test-20101111000003
""") """)
# make snapshot 4, since we used no-holds, it will delete 3 on the source, breaking the backup # run with snapshot-only for 4, since we used no-holds, it will delete 3 on the source, breaking the backup
with patch('time.strftime', return_value="20101111000004"): with patch('time.strftime', return_value="20101111000004"):
self.assertFalse(ZfsAutobackup("test --no-progress --verbose --keep-source=0 --keep-target=0 --allow-empty".split(" ")).run()) self.assertFalse(ZfsAutobackup("test --no-progress --verbose --keep-source=0 --keep-target=0 --allow-empty".split(" ")).run())
@ -890,7 +886,7 @@ test_target1/test_source2/fs2/sub@test-20101111000003
n=ZfsNode("test",l)
d=ZfsDataset(n,"test_source1@test")
sp=d.send_pipe([], prev_snapshot=None, resume_token=None, show_progress=True, raw=False, send_pipes=[], send_properties=True, write_embedded=True, zfs_compressed=True)
with OutputIO() as buf:

View File

@ -71,4 +71,11 @@ test_target1/b/test_source1/fs1/sub@test-20101111000000
test_target1/b/test_source2/fs2/sub@test-20101111000000
test_target1/b/test_target1/a/test_source1/fs1@test-20101111000000
test_target1/b/test_target1/a/test_source1/fs1/sub@test-20101111000000
""")
def test_zfs_compressed(self):
with patch('time.strftime', return_value="20101111000000"):
self.assertFalse(
ZfsAutobackup("test test_target1 --no-progress --verbose --debug --zfs-compressed".split(" ")).run())

View File

@ -1,6 +1,6 @@
from basetest import *
from zfs_autobackup.LogStub import LogStub
from zfs_autobackup.ExecuteNode import ExecuteError
class TestZfsNode(unittest2.TestCase):
@ -9,16 +9,15 @@ class TestZfsNode(unittest2.TestCase):
prepare_zpools()
# return super().setUp()
def test_consistent_snapshot(self):
logger = LogStub()
description = "[Source]"
node = ZfsNode("test", logger, description=description)
with self.subTest("first snapshot"):
node.consistent_snapshot(node.selected_datasets(exclude_paths=[], exclude_received=False), "test-1", 100000)
r = shelltest("zfs list -H -o name -r -t all " + TEST_POOLS)
self.assertEqual(r, """
test_source1
test_source1/fs1
test_source1/fs1@test-1
@ -33,11 +32,10 @@ test_source2/fs3/sub
test_target1
""")
with self.subTest("second snapshot, no changes, no snapshot"):
node.consistent_snapshot(node.selected_datasets(exclude_paths=[], exclude_received=False), "test-2", 1)
r = shelltest("zfs list -H -o name -r -t all " + TEST_POOLS)
self.assertEqual(r, """
test_source1
test_source1/fs1
test_source1/fs1@test-1
@ -53,9 +51,9 @@ test_target1
""") """)
with self.subTest("second snapshot, no changes, empty snapshot"): with self.subTest("second snapshot, no changes, empty snapshot"):
node.consistent_snapshot(node.selected_datasets(exclude_paths=[], exclude_received=False), "test-2",0) node.consistent_snapshot(node.selected_datasets(exclude_paths=[], exclude_received=False), "test-2", 0)
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS) r = shelltest("zfs list -H -o name -r -t all " + TEST_POOLS)
self.assertEqual(r,""" self.assertEqual(r, """
test_source1 test_source1
test_source1/fs1 test_source1/fs1
test_source1/fs1@test-1 test_source1/fs1@test-1
@ -73,31 +71,82 @@ test_source2/fs3/sub
test_target1
""")
def test_consistent_snapshot_prepostcmds(self):
logger = LogStub()
description = "[Source]"
node = ZfsNode("test", logger, description=description, debug_output=True)
with self.subTest("Test if all cmds are executed correctly (no failures)"):
with OutputIO() as buf:
with redirect_stdout(buf):
node.consistent_snapshot(node.selected_datasets(exclude_paths=[], exclude_received=False), "test-1",
0,
pre_snapshot_cmds=["echo pre1", "echo pre2"],
post_snapshot_cmds=["echo post1 >&2", "echo post2 >&2"]
)
self.assertIn("STDOUT > pre1", buf.getvalue())
self.assertIn("STDOUT > pre2", buf.getvalue())
self.assertIn("STDOUT > post1", buf.getvalue())
self.assertIn("STDOUT > post2", buf.getvalue())
with self.subTest("Failure in the middle, only pre1 and both post1 and post2 should be executed, no snapshot should be attempted"):
with OutputIO() as buf:
with redirect_stdout(buf):
with self.assertRaises(ExecuteError):
node.consistent_snapshot(node.selected_datasets(exclude_paths=[], exclude_received=False), "test-1",
0,
pre_snapshot_cmds=["echo pre1", "false", "echo pre2"],
post_snapshot_cmds=["echo post1", "false", "echo post2"]
)
print(buf.getvalue())
self.assertIn("STDOUT > pre1", buf.getvalue())
self.assertNotIn("STDOUT > pre2", buf.getvalue())
self.assertIn("STDOUT > post1", buf.getvalue())
self.assertIn("STDOUT > post2", buf.getvalue())
with self.subTest("Snapshot fails"):
with OutputIO() as buf:
with redirect_stdout(buf):
with self.assertRaises(ExecuteError):
#same snapshot name as before so it fails
node.consistent_snapshot(node.selected_datasets(exclude_paths=[], exclude_received=False), "test-1",
0,
pre_snapshot_cmds=["echo pre1", "echo pre2"],
post_snapshot_cmds=["echo post1", "echo post2"]
)
print(buf.getvalue())
self.assertIn("STDOUT > pre1", buf.getvalue())
self.assertIn("STDOUT > pre2", buf.getvalue())
self.assertIn("STDOUT > post1", buf.getvalue())
self.assertIn("STDOUT > post2", buf.getvalue())
def test_getselected(self):
logger = LogStub()
description = "[Source]"
node = ZfsNode("test", logger, description=description)
s = pformat(node.selected_datasets(exclude_paths=[], exclude_received=False))
print(s)
# basics
self.assertEqual(s, """[(local): test_source1/fs1,
(local): test_source1/fs1/sub,
(local): test_source2/fs2/sub]""")
# caching, so expect same result after changing it
subprocess.check_call("zfs set autobackup:test=true test_source2/fs3", shell=True)
self.assertEqual(s, """[(local): test_source1/fs1,
(local): test_source1/fs1/sub,
(local): test_source2/fs2/sub]""")
def test_validcommand(self):
logger = LogStub()
description = "[Source]"
node = ZfsNode("test", logger, description=description)
with self.subTest("test invalid option"):
self.assertFalse(node.valid_command(["zfs", "send", "--invalid-option", "nonexisting"]))
@ -105,21 +154,19 @@ test_target1
self.assertTrue(node.valid_command(["zfs", "send", "-v", "nonexisting"]))
def test_supportedsendoptions(self):
logger = LogStub()
description = "[Source]"
node = ZfsNode("test", logger, description=description)
# -D propably always supported
self.assertGreater(len(node.supported_send_options), 0)
def test_supportedrecvoptions(self):
logger = LogStub()
description = "[Source]"
# NOTE: this could hang via ssh if we dont close filehandles properly. (which was a previous bug)
node = ZfsNode("test", logger, description=description, ssh_to='localhost')
self.assertIsInstance(node.supported_recv_options, list)
if __name__ == '__main__':
unittest.main()

View File

@ -16,7 +16,7 @@ from zfs_autobackup.ThinnerRule import ThinnerRule
class ZfsAutobackup:
"""main class"""
VERSION = "3.1"
HEADER = "zfs-autobackup v{} - (c)2021 E.H.Eefting (edwin@datux.nl)".format(VERSION)
def __init__(self, argv, print_arguments=True):
@ -45,6 +45,10 @@ class ZfsAutobackup:
help='Target ZFS filesystem (optional: if not specified, zfs-autobackup will only operate '
'as snapshot-tool on source)')
parser.add_argument('--pre-snapshot-cmd', metavar="COMMAND", default=[], action='append',
help='Run COMMAND before snapshotting (can be used multiple times.)')
parser.add_argument('--post-snapshot-cmd', metavar="COMMAND", default=[], action='append',
help='Run COMMAND after snapshotting (can be used multiple times.)')
parser.add_argument('--other-snapshots', action='store_true',
help='Send over other snapshots as well, not just the ones created by this tool.')
parser.add_argument('--no-snapshot', action='store_true',
@ -75,7 +79,7 @@ class ZfsAutobackup:
parser.add_argument('--clear-mountpoint', action='store_true',
help='Set property canmount=noauto for new datasets. (recommended, prevents mount '
'conflicts. same as --set-properties canmount=noauto)')
parser.add_argument('--filter-properties', metavar='PROPERTY,...', type=str,
help='List of properties to "filter" when receiving filesystems. (you can still restore '
'them with zfs inherit -S)')
parser.add_argument('--set-properties', metavar='PROPERTY=VALUE,...', type=str,
@ -99,6 +103,9 @@ class ZfsAutobackup:
parser.add_argument('--encrypt', action='store_true',
help='Encrypt data after receiving it.')
parser.add_argument('--zfs-compressed', action='store_true',
help='Transfer blocks that already have zfs-compression as-is.')
parser.add_argument('--test', action='store_true',
help='dont change anything, just show what would be done (still does all read-only '
'operations)')
@ -166,6 +173,9 @@ class ZfsAutobackup:
if args.compress and args.ssh_source is None and args.ssh_target is None:
self.warning("Using compression, but transfer is local.")
if args.compress and args.zfs_compressed:
self.warning("Using --compress with --zfs-compressed, might be inefficient.")
def verbose(self, txt):
self.log.verbose(txt)
@ -381,7 +391,7 @@ class ZfsAutobackup:
no_send=self.args.no_send,
destroy_incompatible=self.args.destroy_incompatible,
send_pipes=send_pipes, recv_pipes=recv_pipes,
decrypt=self.args.decrypt, encrypt=self.args.encrypt, zfs_compressed=self.args.zfs_compressed )
except Exception as e:
fail_count = fail_count + 1
source_dataset.error("FAILED: " + str(e))
@ -464,7 +474,7 @@ class ZfsAutobackup:
ssh_to=self.args.ssh_source, readonly=self.args.test,
debug_output=self.args.debug_output, description=description, thinner=source_thinner)
source_node.verbose(
"Selects all datasets that have property 'autobackup:{}=true' (or children of datasets that have "
"'autobackup:{}=child')".format(
self.args.backup_name, self.args.backup_name))
@ -500,7 +510,9 @@ class ZfsAutobackup:
if not self.args.no_snapshot:
self.set_title("Snapshotting")
source_node.consistent_snapshot(source_datasets, source_node.new_snapshotname(),
min_changed_bytes=self.args.min_change,
pre_snapshot_cmds=self.args.pre_snapshot_cmd,
post_snapshot_cmds=self.args.post_snapshot_cmd)
################# sync
# if target is specified, we sync the datasets, otherwise we just thin the source. (e.g. snapshot mode)

View File

@ -503,7 +503,7 @@ class ZfsDataset:
return self.from_names(names[1:])
def send_pipe(self, features, prev_snapshot, resume_token, show_progress, raw, send_properties, write_embedded, send_pipes, zfs_compressed):
"""returns a pipe with zfs send output for this snapshot
resume_token: resume sending from this token. (in that case we don't
@ -530,7 +530,7 @@ class ZfsDataset:
if write_embedded and 'embedded_data' in features and "-e" in self.zfs_node.supported_send_options:
cmd.append("--embed") # WRITE_EMBEDDED, more compact stream
if zfs_compressed and "-c" in self.zfs_node.supported_send_options:
cmd.append("--compressed") # use compressed WRITE records
# raw? (send over encrypted data in its original encrypted form without decrypting)
@ -634,7 +634,7 @@ class ZfsDataset:
def transfer_snapshot(self, target_snapshot, features, prev_snapshot, show_progress,
filter_properties, set_properties, ignore_recv_exit_code, resume_token,
raw, send_properties, write_embedded, send_pipes, recv_pipes, zfs_compressed):
"""transfer this snapshot to target_snapshot. specify prev_snapshot for
incremental transfer
@ -673,12 +673,13 @@ class ZfsDataset:
# do it
pipe = self.send_pipe(features=features, show_progress=show_progress, prev_snapshot=prev_snapshot,
resume_token=resume_token, raw=raw, send_properties=send_properties, write_embedded=write_embedded, send_pipes=send_pipes, zfs_compressed=zfs_compressed)
target_snapshot.recv_pipe(pipe, features=features, filter_properties=filter_properties,
set_properties=set_properties, ignore_exit_code=ignore_recv_exit_code, recv_pipes=recv_pipes)
def abort_resume(self):
"""abort current resume state"""
self.debug("Aborting resume")
self.zfs_node.run(["zfs", "recv", "-A", self.name]) self.zfs_node.run(["zfs", "recv", "-A", self.name])
def rollback(self): def rollback(self):
@ -873,9 +874,13 @@ class ZfsDataset:
:type target_keeps: list of ZfsDataset
"""
# on source: destroy all obsoletes before common. (since we cant send them anyways)
# But after common, only delete snapshots that target also doesn't want
if common_snapshot:
before_common = True
else:
before_common = False
for source_snapshot in self.snapshots:
if common_snapshot and source_snapshot.snapshot_name == common_snapshot.snapshot_name:
before_common = False
@ -887,8 +892,8 @@ class ZfsDataset:
# on target: destroy everything thats obsolete, except common_snapshot
for target_snapshot in target_dataset.snapshots:
if (target_snapshot in target_obsoletes) \
and ( not common_snapshot or (target_snapshot.snapshot_name != common_snapshot.snapshot_name)):
if target_snapshot.exists:
target_snapshot.destroy()
@ -901,14 +906,18 @@ class ZfsDataset:
""" """
if 'receive_resume_token' in target_dataset.properties: if 'receive_resume_token' in target_dataset.properties:
if start_snapshot==None:
target_dataset.verbose("Aborting resume, its obsolete.")
target_dataset.abort_resume()
else:
resume_token = target_dataset.properties['receive_resume_token']
# not valid anymore
resume_snapshot = self.get_resume_snapshot(resume_token)
if not resume_snapshot or start_snapshot.snapshot_name != resume_snapshot.snapshot_name:
target_dataset.verbose("Aborting resume, its no longer valid.")
target_dataset.abort_resume()
else:
return resume_token
def _plan_sync(self, target_dataset, also_other_snapshots):
"""plan where to start syncing and what to sync and what to keep
@ -963,7 +972,7 @@ class ZfsDataset:
def sync_snapshots(self, target_dataset, features, show_progress, filter_properties, set_properties,
ignore_recv_exit_code, holds, rollback, decrypt, encrypt, also_other_snapshots,
no_send, destroy_incompatible, send_pipes, recv_pipes, zfs_compressed):
"""sync this dataset's snapshots to target_dataset, while also thinning
out old snapshots along the way.
@ -1046,7 +1055,7 @@ class ZfsDataset:
filter_properties=active_filter_properties,
set_properties=active_set_properties,
ignore_recv_exit_code=ignore_recv_exit_code,
resume_token=resume_token, write_embedded=write_embedded, raw=raw, send_properties=send_properties, send_pipes=send_pipes, recv_pipes=recv_pipes, zfs_compressed=zfs_compressed)
resume_token = None
@ -1075,7 +1084,7 @@ class ZfsDataset:
source_snapshot.debug("skipped (target doesn't need it)")
# was it actually a resume?
if resume_token:
target_dataset.verbose("Aborting resume, we dont want that snapshot anymore.")
target_dataset.abort_resume()
resume_token = None

View File

@ -1,6 +1,7 @@
# python 2 compatibility
from __future__ import print_function
import re
import shlex
import subprocess
import sys
import time
@ -161,7 +162,7 @@ class ZfsNode(ExecuteNode):
"""determine uniq new snapshotname""" """determine uniq new snapshotname"""
return self.backup_name + "-" + time.strftime("%Y%m%d%H%M%S") return self.backup_name + "-" + time.strftime("%Y%m%d%H%M%S")
def consistent_snapshot(self, datasets, snapshot_name, min_changed_bytes): def consistent_snapshot(self, datasets, snapshot_name, min_changed_bytes, pre_snapshot_cmds=[], post_snapshot_cmds=[]):
"""create a consistent (atomic) snapshot of specified datasets, per pool. """create a consistent (atomic) snapshot of specified datasets, per pool.
""" """
@ -191,17 +192,32 @@ class ZfsNode(ExecuteNode):
self.verbose("No changes anywhere: not creating snapshots.") self.verbose("No changes anywhere: not creating snapshots.")
return return
try:
for cmd in pre_snapshot_cmds:
self.verbose("Running pre-snapshot-cmd")
self.run(cmd=shlex.split(cmd), readonly=False)
# create consistent snapshot per pool
for (pool_name, snapshots) in pools.items():
cmd = ["zfs", "snapshot"]
cmd.extend(map(lambda snapshot_: str(snapshot_), snapshots))
self.verbose("Creating snapshots {} in pool {}".format(snapshot_name, pool_name))
self.run(cmd, readonly=False)
finally:
for cmd in post_snapshot_cmds:
self.verbose("Running post-snapshot-cmd")
try:
self.run(cmd=shlex.split(cmd), readonly=False)
except Exception as e:
pass
def selected_datasets(self, exclude_received, exclude_paths):
"""determine filesystems that should be backed up by looking at the special autobackup-property, systemwide
returns: list of ZfsDataset
"""