Compare commits: v3.1-beta2...v3.1-beta3 (37 commits)
| SHA1 |
|---|
| a16a038f0e |
| fc0da9d380 |
| 31be12c0bf |
| 176f04b302 |
| 7696d8c16d |
| 190a73ec10 |
| 2bf015e127 |
| 671eda7386 |
| 3d4b26cec3 |
| c0ea311e18 |
| b7b2723b2e |
| ec1d3ff93e |
| 352d5e6094 |
| 488ff6f551 |
| f52b8bbf58 |
| e47d461999 |
| a920744b1e |
| 63f423a201 |
| db6523f3c0 |
| 6b172dce2d |
| 85d493469d |
| bef3be4955 |
| f9719ba87e |
| 4b97f789df |
| ed7cd41ad7 |
| 62e19d97c2 |
| 594a2664c4 |
| d8fbc96be6 |
| 61bb590112 |
| 86ea5e49f4 |
| 01642365c7 |
| 4910b1dfb5 |
| 966df73d2f |
| 69ed827c0d |
| e79f6ac157 |
| 59efd070a1 |
| 80c1bdad1c |
`.github/workflows/regression.yml` (vendored, 2 changes)

```diff
@@ -64,7 +64,7 @@ jobs:
       python-version: '2.x'
 
     - name: Prepare
-      run: sudo apt update && sudo apt install zfsutils-linux python-setuptools && sudo -H pip install coverage unittest2 mock==3.0.5 coveralls
+      run: sudo apt update && sudo apt install zfsutils-linux python-setuptools && sudo -H pip install coverage unittest2 mock==3.0.5 coveralls colorama
 
     - name: Regression test
       run: sudo -E ./tests/run_tests
```
`.gitignore` (vendored, 1 change)

```diff
@@ -11,3 +11,4 @@ __pycache__
 python2.env
 venv
 .idea
+password.sh
```
`README.md` (166 changes)

```diff
@@ -45,11 +45,14 @@ zfs-autobackup tries to be the easiest to use backup tool for zfs.
 * Gracefully handles destroyed datasets on source.
 * Easy installation:
   * Just install zfs-autobackup via pip, or download it manually.
+  * Only needs to be installed on one side.
 * Written in python and uses zfs-commands, no 3rd party dependency's or libraries needed.
 * No separate config files or properties. Just one zfs-autobackup command you can copy/paste in your backup script.
 
 ## Installation
 
+You only need to install zfs-autobackup on the side that initiates the backup. The other side doesn't need any extra configuration.
+
 ### Using pip
 
 The recommended way on most servers is to use [pip](https://pypi.org/project/zfs-autobackup/):
```
```diff
@@ -367,6 +370,29 @@ zfs-autobackup will re-evaluate this on every run: As soon as a snapshot doesn't
 
 Snapshots on the source that still have to be send to the target wont be destroyed off course. (If the target still wants them, according to the target schedule)
 
+## How zfs-autobackup handles encryption
+
+In normal operation datasets are transferred unaltered:
+
+* Source datasets that are encrypted will be sent over as such and stay encrypted at the target side. (In ZFS this is called raw-mode.) You don't need keys at the target side if you don't want to access the data.
+* Source datasets that are plain will stay that way on the target. (Even if the specified target-path IS encrypted.)
+
+Basically you don't have to do anything or worry about anything.
+
+### Decrypting/encrypting
+
+Things get different if you want to change the encryption-state of a dataset during transfer:
+
+* If you want to decrypt encrypted datasets before sending them, you should use the `--decrypt` option. Datasets will then be stored plain at the target.
+* If you want to encrypt plain datasets when they are received, you should use the `--encrypt` option. Datasets will then be stored encrypted at the target. (Datasets that are already encrypted will still be sent over unaltered in raw-mode.)
+* If you also want to re-encrypt encrypted datasets with the target-side encryption, you can use both options.
+
+Note 1: The --encrypt option relies on inheriting encryption parameters from the parent datasets on the target side. You are responsible for setting those up and loading the keys. So --encrypt is no guarantee of encryption: if you don't set it up, it can't encrypt.
+
+Note 2: Decide what you want at an early stage: if you change the --encrypt or --decrypt parameter after the initial sync you might get weird and wonderful errors. (Nothing dangerous.)
+
+I'll add some tips when the issues start to come in on github. :)
+
 ## Tips
 
 * Use ```--debug``` if something goes wrong and you want to see the commands that are executed. This will also stop at the first error.
```
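For illustration, this is how the `--decrypt`/`--encrypt` options described in the added README section combine on the command line. The backup name `offsite1` and target path `backuppool/backups` are made-up placeholders, not from the diff:

```console
# default: encrypted datasets stay encrypted (raw-mode), plain ones stay plain
zfs-autobackup offsite1 backuppool/backups

# store everything plain on the target
zfs-autobackup --decrypt offsite1 backuppool/backups

# encrypt plain datasets on receive (inherits target-side encryption parameters)
zfs-autobackup --encrypt offsite1 backuppool/backups

# re-encrypt with the target-side keys
zfs-autobackup --decrypt --encrypt offsite1 backuppool/backups
```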
````diff
@@ -436,105 +462,67 @@ Look in man ssh_config for many more options.
 
 ## Usage
 
-Here you find all the options:
+(NOTE: Quite a lot has changed since the current stable version 3.0. The page you are viewing is for upcoming version 3.1 which is still in beta.)
 
 ```console
-[root@server ~]# zfs-autobackup --help
-usage: zfs-autobackup [-h] [--ssh-config SSH_CONFIG] [--ssh-source SSH_SOURCE]
-                      [--ssh-target SSH_TARGET] [--keep-source KEEP_SOURCE]
-                      [--keep-target KEEP_TARGET] [--other-snapshots]
-                      [--no-snapshot] [--no-send] [--min-change MIN_CHANGE]
-                      [--allow-empty] [--ignore-replicated] [--no-holds]
-                      [--strip-path STRIP_PATH] [--clear-refreservation]
-                      [--clear-mountpoint]
-                      [--filter-properties FILTER_PROPERTIES]
-                      [--set-properties SET_PROPERTIES] [--rollback]
-                      [--destroy-incompatible] [--ignore-transfer-errors]
-                      [--raw] [--test] [--verbose] [--debug] [--debug-output]
-                      [--progress]
-                      backup-name [target-path]
-
-zfs-autobackup v3.0-rc12 - Copyright 2020 E.H.Eefting (edwin@datux.nl)
-
-positional arguments:
-  backup-name           Name of the backup (you should set the zfs property
-                        "autobackup:backup-name" to true on filesystems you
-                        want to backup
-  target-path           Target ZFS filesystem (optional: if not specified,
-                        zfs-autobackup will only operate as snapshot-tool on
-                        source)
-
-optional arguments:
-  -h, --help            show this help message and exit
-  --ssh-config SSH_CONFIG
-                        Custom ssh client config
-  --ssh-source SSH_SOURCE
-                        Source host to get backup from. (user@hostname)
-                        Default None.
-  --ssh-target SSH_TARGET
-                        Target host to push backup to. (user@hostname) Default
-                        None.
-  --keep-source KEEP_SOURCE
-                        Thinning schedule for old source snapshots. Default:
-                        10,1d1w,1w1m,1m1y
-  --keep-target KEEP_TARGET
-                        Thinning schedule for old target snapshots. Default:
-                        10,1d1w,1w1m,1m1y
-  --other-snapshots     Send over other snapshots as well, not just the ones
-                        created by this tool.
-  --no-snapshot         Don't create new snapshots (useful for finishing
-                        uncompleted backups, or cleanups)
-  --no-send             Don't send snapshots (useful for cleanups, or if you
-                        want a serperate send-cronjob)
-  --min-change MIN_CHANGE
-                        Number of bytes written after which we consider a
-                        dataset changed (default 1)
-  --allow-empty         If nothing has changed, still create empty snapshots.
-                        (same as --min-change=0)
-  --ignore-replicated   Ignore datasets that seem to be replicated some other
-                        way. (No changes since lastest snapshot. Useful for
-                        proxmox HA replication)
-  --no-holds            Don't lock snapshots on the source. (Useful to allow
-                        proxmox HA replication to switches nodes)
-  --strip-path STRIP_PATH
-                        Number of directories to strip from target path (use 1
-                        when cloning zones between 2 SmartOS machines)
-  --clear-refreservation
-                        Filter "refreservation" property. (recommended, safes
-                        space. same as --filter-properties refreservation)
-  --clear-mountpoint    Set property canmount=noauto for new datasets.
-                        (recommended, prevents mount conflicts. same as --set-
-                        properties canmount=noauto)
-  --filter-properties FILTER_PROPERTIES
-                        List of properties to "filter" when receiving
-                        filesystems. (you can still restore them with zfs
-                        inherit -S)
-  --set-properties SET_PROPERTIES
-                        List of propererties to override when receiving
-                        filesystems. (you can still restore them with zfs
-                        inherit -S)
-  --rollback            Rollback changes to the latest target snapshot before
-                        starting. (normally you can prevent changes by setting
-                        the readonly property on the target_path to on)
-  --destroy-incompatible
-                        Destroy incompatible snapshots on target. Use with
-                        care! (implies --rollback)
-  --ignore-transfer-errors
-                        Ignore transfer errors (still checks if received
-                        filesystem exists. useful for acltype errors)
-  --raw                 For encrypted datasets, send data exactly as it exists
-                        on disk.
-  --test                dont change anything, just show what would be done
-                        (still does all read-only operations)
-  --verbose             verbose output
-  --debug               Show zfs commands that are executed, stops after an
-                        exception.
-  --debug-output        Show zfs commands and their output/exit codes. (noisy)
-  --progress            show zfs progress output (to stderr). Enabled by
-                        default on ttys.
-
-When a filesystem fails, zfs_backup will continue and report the number of
-failures at that end. Also the exit code will indicate the number of failures.
+usage: zfs-autobackup [-h] [--ssh-config CONFIG-FILE] [--ssh-source USER@HOST] [--ssh-target USER@HOST] [--keep-source SCHEDULE] [--keep-target SCHEDULE] [--other-snapshots] [--no-snapshot] [--no-send]
+                      [--no-thinning] [--no-holds] [--min-change BYTES] [--allow-empty] [--ignore-replicated] [--strip-path N] [--clear-refreservation] [--clear-mountpoint] [--filter-properties PROPERY,...]
+                      [--set-properties PROPERTY=VALUE,...] [--rollback] [--destroy-incompatible] [--destroy-missing SCHEDULE] [--ignore-transfer-errors] [--decrypt] [--encrypt] [--test] [--verbose] [--debug]
+                      [--debug-output] [--progress] [--send-pipe COMMAND] [--recv-pipe COMMAND]
+                      backup-name [target-path]
+
+zfs-autobackup v3.1-beta3 - Copyright 2020 E.H.Eefting (edwin@datux.nl)
+
+positional arguments:
+  backup-name           Name of the backup (you should set the zfs property "autobackup:backup-name" to true on filesystems you want to backup
+  target-path           Target ZFS filesystem (optional: if not specified, zfs-autobackup will only operate as snapshot-tool on source)
+
+optional arguments:
+  -h, --help            show this help message and exit
+  --ssh-config CONFIG-FILE
+                        Custom ssh client config
+  --ssh-source USER@HOST
+                        Source host to get backup from.
+  --ssh-target USER@HOST
+                        Target host to push backup to.
+  --keep-source SCHEDULE
+                        Thinning schedule for old source snapshots. Default: 10,1d1w,1w1m,1m1y
+  --keep-target SCHEDULE
+                        Thinning schedule for old target snapshots. Default: 10,1d1w,1w1m,1m1y
+  --other-snapshots     Send over other snapshots as well, not just the ones created by this tool.
+  --no-snapshot         Don't create new snapshots (useful for finishing uncompleted backups, or cleanups)
+  --no-send             Don't send snapshots (useful for cleanups, or if you want a serperate send-cronjob)
+  --no-thinning         Do not destroy any snapshots.
+  --no-holds            Don't hold snapshots. (Faster. Allows you to destroy common snapshot.)
+  --min-change BYTES    Number of bytes written after which we consider a dataset changed (default 1)
+  --allow-empty         If nothing has changed, still create empty snapshots. (same as --min-change=0)
+  --ignore-replicated   Ignore datasets that seem to be replicated some other way. (No changes since lastest snapshot. Useful for proxmox HA replication)
+  --strip-path N        Number of directories to strip from target path (use 1 when cloning zones between 2 SmartOS machines)
+  --clear-refreservation
+                        Filter "refreservation" property. (recommended, safes space. same as --filter-properties refreservation)
+  --clear-mountpoint    Set property canmount=noauto for new datasets. (recommended, prevents mount conflicts. same as --set-properties canmount=noauto)
+  --filter-properties PROPERY,...
+                        List of properties to "filter" when receiving filesystems. (you can still restore them with zfs inherit -S)
+  --set-properties PROPERTY=VALUE,...
+                        List of propererties to override when receiving filesystems. (you can still restore them with zfs inherit -S)
+  --rollback            Rollback changes to the latest target snapshot before starting. (normally you can prevent changes by setting the readonly property on the target_path to on)
+  --destroy-incompatible
+                        Destroy incompatible snapshots on target. Use with care! (implies --rollback)
+  --destroy-missing SCHEDULE
+                        Destroy datasets on target that are missing on the source. Specify the time since the last snapshot, e.g: --destroy-missing 30d
+  --ignore-transfer-errors
+                        Ignore transfer errors (still checks if received filesystem exists. useful for acltype errors)
+  --decrypt             Decrypt data before sending it over.
+  --encrypt             Encrypt data after receiving it.
+  --test                dont change anything, just show what would be done (still does all read-only operations)
+  --verbose             verbose output
+  --debug               Show zfs commands that are executed, stops after an exception.
+  --debug-output        Show zfs commands and their output/exit codes. (noisy)
+  --progress            show zfs progress output. Enabled automaticly on ttys. (use --no-progress to disable)
+  --send-pipe COMMAND   pipe zfs send output through COMMAND
+  --recv-pipe COMMAND   pipe zfs recv input through COMMAND
+
+Full manual at: https://github.com/psy0rz/zfs_autobackup
 ```
 
 ## Troubleshooting
````
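As a hypothetical illustration of the positional arguments in the help text above (backup name, host, and pool names are invented): with a target-path zfs-autobackup snapshots and sends; without one it only acts as a snapshot/thinning tool on the source:

```console
# snapshot datasets tagged autobackup:offsite1 and send them to a remote target
zfs-autobackup --ssh-target root@backupserver offsite1 backuppool/backups --verbose

# snapshot-only mode: no target-path given
zfs-autobackup offsite1 --verbose
```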
```diff
@@ -551,15 +539,19 @@ This usually means you've created a new snapshot on the target side during a bac
 
 This means files have been modified on the target side somehow.
 
-You can use --rollback to automaticly rollback such changes.
-
-Note: This usually happens if the source-side has a non-standard mountpoint for a dataset, and you're using --clear-mountpoint. In this case the target side creates a mountpoint in the parent dataset, causing the change.
+You can use --rollback to automatically roll back such changes. Also try destroying the target dataset and using --clear-mountpoint on the next run. This way it won't get mounted.
 
 ### It says 'internal error: Invalid argument'
 
 In some cases (Linux -> FreeBSD) this means certain properties are not fully supported on the target system.
 
-Try using something like: --filter-properties xattr
+Try using something like: --filter-properties xattr or --ignore-transfer-errors.
+
+### zfs receive fails, but snapshot seems to be received successfully
+
+This happens if you transfer between different operating systems / zfs versions or feature sets.
+
+Try using the --ignore-transfer-errors option. This will ignore the error. It will still check if the snapshot is actually received correctly.
 
 ## Restore example
 
```
```diff
@@ -1,6 +1,6 @@
 colorama
 argparse
-coverage==4.5.4
+coverage
 python-coveralls
 unittest2
 mock
```
`tests/autoruntests` (new executable file, 6 lines)

```diff
@@ -0,0 +1,6 @@
+#!/bin/bash
+
+#NOTE: run from top directory
+
+find tests/*.py zfs_autobackup/*.py| entr -r ./tests/run_tests $@
+
```
```diff
@@ -58,7 +58,8 @@ def redirect_stderr(target):
 
 def shelltest(cmd):
     """execute and print result as nice copypastable string for unit tests (adds extra newlines on top/bottom)"""
-    ret=(subprocess.check_output(cmd , shell=True).decode('utf-8'))
+
+    ret=(subprocess.check_output("SUDO_ASKPASS=./password.sh sudo -A "+cmd , shell=True).decode('utf-8'))
     print("######### result of: {}".format(cmd))
     print(ret)
     print("#########")
```
|||||||
@ -19,7 +19,7 @@ if ! [ -e /root/.ssh/id_rsa ]; then
|
|||||||
fi
|
fi
|
||||||
|
|
||||||
|
|
||||||
coverage run --source zfs_autobackup -m unittest discover -vvvvf $SCRIPTDIR $@ 2>&1
|
coverage run --branch --source zfs_autobackup -m unittest discover -vvvvf $SCRIPTDIR $@ 2>&1
|
||||||
EXIT=$?
|
EXIT=$?
|
||||||
|
|
||||||
echo
|
echo
|
||||||
|
|||||||
`tests/test_cmdpipe.py` (new file, 111 lines)

```diff
@@ -0,0 +1,111 @@
+from basetest import *
+from zfs_autobackup.CmdPipe import CmdPipe
+
+
+class TestCmdPipe(unittest2.TestCase):
+
+    def test_single(self):
+        """single process stdout and stderr"""
+        p=CmdPipe(readonly=False, inp=None)
+        err=[]
+        out=[]
+        p.add(["ls", "-d", "/", "/", "/nonexistent"], stderr_handler=lambda line: err.append(line))
+        executed=p.execute(stdout_handler=lambda line: out.append(line))
+
+        self.assertEqual(err, ["ls: cannot access '/nonexistent': No such file or directory"])
+        self.assertEqual(out, ["/","/"])
+        self.assertTrue(executed)
+        self.assertEqual(p.items[0]['process'].returncode,2)
+
+    def test_input(self):
+        """test stdinput"""
+        p=CmdPipe(readonly=False, inp="test")
+        err=[]
+        out=[]
+        p.add(["echo", "test"], stderr_handler=lambda line: err.append(line))
+        executed=p.execute(stdout_handler=lambda line: out.append(line))
+
+        self.assertEqual(err, [])
+        self.assertEqual(out, ["test"])
+        self.assertTrue(executed)
+        self.assertEqual(p.items[0]['process'].returncode,0)
+
+    def test_pipe(self):
+        """test piped"""
+        p=CmdPipe(readonly=False)
+        err1=[]
+        err2=[]
+        err3=[]
+        out=[]
+        p.add(["echo", "test"], stderr_handler=lambda line: err1.append(line))
+        p.add(["tr", "e", "E"], stderr_handler=lambda line: err2.append(line))
+        p.add(["tr", "t", "T"], stderr_handler=lambda line: err3.append(line))
+        executed=p.execute(stdout_handler=lambda line: out.append(line))
+
+        self.assertEqual(err1, [])
+        self.assertEqual(err2, [])
+        self.assertEqual(err3, [])
+        self.assertEqual(out, ["TEsT"])
+        self.assertTrue(executed)
+        self.assertEqual(p.items[0]['process'].returncode,0)
+        self.assertEqual(p.items[1]['process'].returncode,0)
+        self.assertEqual(p.items[2]['process'].returncode,0)
+
+        #test str representation as well
+        self.assertEqual(str(p), "(echo test) | (tr e E) | (tr t T)")
+
+    def test_pipeerrors(self):
+        """test piped stderrs """
+        p=CmdPipe(readonly=False)
+        err1=[]
+        err2=[]
+        err3=[]
+        out=[]
+        p.add(["ls", "/nonexistent1"], stderr_handler=lambda line: err1.append(line))
+        p.add(["ls", "/nonexistent2"], stderr_handler=lambda line: err2.append(line))
+        p.add(["ls", "/nonexistent3"], stderr_handler=lambda line: err3.append(line))
+        executed=p.execute(stdout_handler=lambda line: out.append(line))
+
+        self.assertEqual(err1, ["ls: cannot access '/nonexistent1': No such file or directory"])
+        self.assertEqual(err2, ["ls: cannot access '/nonexistent2': No such file or directory"])
+        self.assertEqual(err3, ["ls: cannot access '/nonexistent3': No such file or directory"])
+        self.assertEqual(out, [])
+        self.assertTrue(executed)
+        self.assertEqual(p.items[0]['process'].returncode,2)
+        self.assertEqual(p.items[1]['process'].returncode,2)
+        self.assertEqual(p.items[2]['process'].returncode,2)
+
+    def test_readonly_execute(self):
+        """everything readonly, just should execute"""
+
+        p=CmdPipe(readonly=True)
+        err1=[]
+        err2=[]
+        out=[]
+        p.add(["echo", "test1"], stderr_handler=lambda line: err1.append(line), readonly=True)
+        p.add(["echo", "test2"], stderr_handler=lambda line: err2.append(line), readonly=True)
+        executed=p.execute(stdout_handler=lambda line: out.append(line))
+
+        self.assertEqual(err1, [])
+        self.assertEqual(err2, [])
+        self.assertEqual(out, ["test2"])
+        self.assertTrue(executed)
+        self.assertEqual(p.items[0]['process'].returncode,0)
+        self.assertEqual(p.items[1]['process'].returncode,0)
+
+    def test_readonly_skip(self):
+        """one command not readonly, skip"""
+
+        p=CmdPipe(readonly=True)
+        err1=[]
+        err2=[]
+        out=[]
+        p.add(["echo", "test1"], stderr_handler=lambda line: err1.append(line), readonly=False)
+        p.add(["echo", "test2"], stderr_handler=lambda line: err2.append(line), readonly=True)
+        executed=p.execute(stdout_handler=lambda line: out.append(line))
+
+        self.assertEqual(err1, [])
+        self.assertEqual(err2, [])
+        self.assertEqual(out, [])
+        self.assertFalse(executed)
```
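The pipeline behaviour these tests exercise can be sketched with plain `subprocess`; this is an illustrative stand-in using only the standard library, not the project's actual `CmdPipe` implementation:

```python
import subprocess

def run_pipe(commands, inp=None):
    """Chain commands stdout -> stdin like a shell pipeline.
    Returns (stdout lines of the last command, exit codes of all commands)."""
    procs = []
    for cmd in commands:
        # first command reads our input (if any); later ones read the previous stdout
        stdin = procs[-1].stdout if procs else (subprocess.PIPE if inp is not None else None)
        procs.append(subprocess.Popen(cmd, stdin=stdin, stdout=subprocess.PIPE))
    if inp is not None:
        procs[0].stdin.write(inp.encode())
        procs[0].stdin.close()
    out = procs[-1].stdout.read().decode().splitlines()
    codes = [p.wait() for p in procs]
    return out, codes

# mirrors test_pipe above: echo test | tr e E | tr t T
out, codes = run_pipe([["echo", "test"], ["tr", "e", "E"], ["tr", "t", "T"]])
```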
```diff
@@ -14,16 +14,16 @@ class TestZfsNode(unittest2.TestCase):
 
         #initial backup
         with patch('time.strftime', return_value="10101111000000"): #1000 years in past
-            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-holds".split(" ")).run())
+            self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --no-holds".split(" ")).run())
 
         with patch('time.strftime', return_value="20101111000000"): #far in past
-            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-holds --allow-empty".split(" ")).run())
+            self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --no-holds --allow-empty".split(" ")).run())
 
 
         with self.subTest("Should do nothing yet"):
             with OutputIO() as buf:
                 with redirect_stdout(buf):
-                    self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-snapshot --destroy-missing 0s".split(" ")).run())
+                    self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --no-snapshot --destroy-missing 0s".split(" ")).run())
 
                 print(buf.getvalue())
                 self.assertNotIn(": Destroy missing", buf.getvalue())
@@ -36,7 +36,7 @@ class TestZfsNode(unittest2.TestCase):
 
             with OutputIO() as buf:
                 with redirect_stdout(buf), redirect_stderr(buf):
-                    self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-snapshot --destroy-missing 0s".split(" ")).run())
+                    self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --no-snapshot --destroy-missing 0s".split(" ")).run())
 
                 print(buf.getvalue())
                 #should have done the snapshot cleanup for destoy missing:
@@ -54,7 +54,7 @@ class TestZfsNode(unittest2.TestCase):
             with OutputIO() as buf:
                 with redirect_stdout(buf):
                     #100y: lastest should not be old enough, while second to latest snapshot IS old enough:
-                    self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-snapshot --destroy-missing 100y".split(" ")).run())
+                    self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --no-snapshot --destroy-missing 100y".split(" ")).run())
 
                 print(buf.getvalue())
                 self.assertIn(": Waiting for deadline", buf.getvalue())
@@ -62,7 +62,7 @@ class TestZfsNode(unittest2.TestCase):
             #past deadline, destroy
             with OutputIO() as buf:
                 with redirect_stdout(buf):
-                    self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-snapshot --destroy-missing 1y".split(" ")).run())
+                    self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --no-snapshot --destroy-missing 1y".split(" ")).run())
 
                 print(buf.getvalue())
                 self.assertIn("sub: Destroying", buf.getvalue())
@@ -75,7 +75,7 @@ class TestZfsNode(unittest2.TestCase):
 
             with OutputIO() as buf:
                 with redirect_stdout(buf):
-                    self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-snapshot --destroy-missing 0s".split(" ")).run())
+                    self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --no-snapshot --destroy-missing 0s".split(" ")).run())
 
                 print(buf.getvalue())
 
@@ -90,7 +90,7 @@ class TestZfsNode(unittest2.TestCase):
 
             with OutputIO() as buf:
                 with redirect_stdout(buf), redirect_stderr(buf):
-                    self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-snapshot --destroy-missing 0s".split(" ")).run())
+                    self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --no-snapshot --destroy-missing 0s".split(" ")).run())
 
                 print(buf.getvalue())
                 #now tries to destroy our own last snapshot (before the final destroy of the dataset)
@@ -105,7 +105,7 @@ class TestZfsNode(unittest2.TestCase):
 
             with OutputIO() as buf:
                 with redirect_stdout(buf), redirect_stderr(buf):
-                    self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-snapshot --destroy-missing 0s".split(" ")).run())
+                    self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --no-snapshot --destroy-missing 0s".split(" ")).run())
 
                 print(buf.getvalue())
                 #should have done the snapshot cleanup for destoy missing:
@@ -113,7 +113,7 @@ class TestZfsNode(unittest2.TestCase):
 
             with OutputIO() as buf:
                 with redirect_stdout(buf), redirect_stderr(buf):
-                    self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-snapshot --destroy-missing 0s".split(" ")).run())
+                    self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --no-snapshot --destroy-missing 0s".split(" ")).run())
 
                 print(buf.getvalue())
                 #on second run it sees the dangling ex-parent but doesnt know what to do with it (since it has no own snapshot)
```
`tests/test_encryption.py` (new file, 192 lines)

```diff
@@ -0,0 +1,192 @@
+from zfs_autobackup.CmdPipe import CmdPipe
+from basetest import *
+import time
+
+
+# We have to do a LOT to properly test encryption/decryption/raw transfers
+#
+# For every scenario we need at least:
+# - plain source dataset
+# - encrypted source dataset
+# - plain target path
+# - encrypted target path
+# - do a full transfer
+# - do a incremental transfer
+
+# Scenarios:
+# - Raw transfer
+# - Decryption transfer (--decrypt)
+# - Encryption transfer (--encrypt)
+# - Re-encryption transfer (--decrypt --encrypt)
+
+class TestZfsEncryption(unittest2.TestCase):
+
+
+    def setUp(self):
+        prepare_zpools()
+
+        try:
+            shelltest("zfs get encryption test_source1")
+        except:
+            self.skipTest("Encryption not supported on this ZFS version.")
+
+    def prepare_encrypted_dataset(self, key, path, unload_key=False):
+
+        # create encrypted source dataset
+        shelltest("echo {} > /tmp/zfstest.key".format(key))
+        shelltest("zfs create -o keylocation=file:///tmp/zfstest.key -o keyformat=passphrase -o encryption=on {}".format(path))
+
+        if unload_key:
+            shelltest("zfs unmount {}".format(path))
+            shelltest("zfs unload-key {}".format(path))
+
+        # r=shelltest("dd if=/dev/zero of=/test_source1/fs1/enc1/data.txt bs=200000 count=1")
+
+    def test_raw(self):
+        """send encrypted data unaltered (standard operation)"""
+
+        self.prepare_encrypted_dataset("11111111", "test_source1/fs1/encryptedsource")
+        self.prepare_encrypted_dataset("11111111", "test_source1/fs1/encryptedsourcekeyless", unload_key=True) # raw mode shouldn't need a key
+        self.prepare_encrypted_dataset("22222222", "test_target1/encryptedtarget")
+
+        with patch('time.strftime', return_value="20101111000000"):
+            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-progress --allow-empty".split(" ")).run())
+            self.assertFalse(ZfsAutobackup("test test_target1/encryptedtarget --verbose --no-progress --no-snapshot".split(" ")).run())
+
+        with patch('time.strftime', return_value="20101111000001"):
+            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-progress --allow-empty".split(" ")).run())
+            self.assertFalse(ZfsAutobackup("test test_target1/encryptedtarget --verbose --no-progress --no-snapshot".split(" ")).run())
+
+        r = shelltest("zfs get -r -t filesystem encryptionroot test_target1")
+        self.assertMultiLineEqual(r,"""
+NAME PROPERTY VALUE SOURCE
+test_target1 encryptionroot - -
+test_target1/encryptedtarget encryptionroot test_target1/encryptedtarget -
+test_target1/encryptedtarget/test_source1 encryptionroot test_target1/encryptedtarget -
+test_target1/encryptedtarget/test_source1/fs1 encryptionroot - -
+test_target1/encryptedtarget/test_source1/fs1/encryptedsource encryptionroot test_target1/encryptedtarget/test_source1/fs1/encryptedsource -
+test_target1/encryptedtarget/test_source1/fs1/encryptedsourcekeyless encryptionroot test_target1/encryptedtarget/test_source1/fs1/encryptedsourcekeyless -
+test_target1/encryptedtarget/test_source1/fs1/sub encryptionroot - -
+test_target1/encryptedtarget/test_source2 encryptionroot test_target1/encryptedtarget -
+test_target1/encryptedtarget/test_source2/fs2 encryptionroot test_target1/encryptedtarget -
+test_target1/encryptedtarget/test_source2/fs2/sub encryptionroot - -
+test_target1/test_source1 encryptionroot - -
+test_target1/test_source1/fs1 encryptionroot - -
```
|
||||||
|
test_target1/test_source1/fs1/encryptedsource encryptionroot test_target1/test_source1/fs1/encryptedsource -
|
||||||
|
test_target1/test_source1/fs1/encryptedsourcekeyless encryptionroot test_target1/test_source1/fs1/encryptedsourcekeyless -
|
||||||
|
test_target1/test_source1/fs1/sub encryptionroot - -
|
||||||
|
test_target1/test_source2 encryptionroot - -
|
||||||
|
test_target1/test_source2/fs2 encryptionroot - -
|
||||||
|
test_target1/test_source2/fs2/sub encryptionroot - -
|
||||||
|
""")
|
||||||
|
|
||||||
|
def test_decrypt(self):
|
||||||
|
"""decrypt data and store unencrypted (--decrypt)"""
|
||||||
|
|
||||||
|
self.prepare_encrypted_dataset("11111111", "test_source1/fs1/encryptedsource")
|
||||||
|
self.prepare_encrypted_dataset("22222222", "test_target1/encryptedtarget")
|
||||||
|
|
||||||
|
with patch('time.strftime', return_value="20101111000000"):
|
||||||
|
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-progress --decrypt --allow-empty".split(" ")).run())
|
||||||
|
self.assertFalse(ZfsAutobackup("test test_target1/encryptedtarget --verbose --no-progress --decrypt --no-snapshot".split(" ")).run())
|
||||||
|
|
||||||
|
with patch('time.strftime', return_value="20101111000001"):
|
||||||
|
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-progress --decrypt --allow-empty".split(" ")).run())
|
||||||
|
self.assertFalse(ZfsAutobackup("test test_target1/encryptedtarget --verbose --no-progress --decrypt --no-snapshot".split(" ")).run())
|
||||||
|
|
||||||
|
r = shelltest("zfs get -r -t filesystem encryptionroot test_target1")
|
||||||
|
self.assertEqual(r, """
|
||||||
|
NAME PROPERTY VALUE SOURCE
|
||||||
|
test_target1 encryptionroot - -
|
||||||
|
test_target1/encryptedtarget encryptionroot test_target1/encryptedtarget -
|
||||||
|
test_target1/encryptedtarget/test_source1 encryptionroot test_target1/encryptedtarget -
|
||||||
|
test_target1/encryptedtarget/test_source1/fs1 encryptionroot - -
|
||||||
|
test_target1/encryptedtarget/test_source1/fs1/encryptedsource encryptionroot - -
|
||||||
|
test_target1/encryptedtarget/test_source1/fs1/sub encryptionroot - -
|
||||||
|
test_target1/encryptedtarget/test_source2 encryptionroot test_target1/encryptedtarget -
|
||||||
|
test_target1/encryptedtarget/test_source2/fs2 encryptionroot test_target1/encryptedtarget -
|
||||||
|
test_target1/encryptedtarget/test_source2/fs2/sub encryptionroot - -
|
||||||
|
test_target1/test_source1 encryptionroot - -
|
||||||
|
test_target1/test_source1/fs1 encryptionroot - -
|
||||||
|
test_target1/test_source1/fs1/encryptedsource encryptionroot - -
|
||||||
|
test_target1/test_source1/fs1/sub encryptionroot - -
|
||||||
|
test_target1/test_source2 encryptionroot - -
|
||||||
|
test_target1/test_source2/fs2 encryptionroot - -
|
||||||
|
test_target1/test_source2/fs2/sub encryptionroot - -
|
||||||
|
""")
|
||||||
|
|
||||||
|
def test_encrypt(self):
|
||||||
|
"""send normal data set and store encrypted on the other side (--encrypt) issue #60 """
|
||||||
|
|
||||||
|
self.prepare_encrypted_dataset("11111111", "test_source1/fs1/encryptedsource")
|
||||||
|
self.prepare_encrypted_dataset("22222222", "test_target1/encryptedtarget")
|
||||||
|
|
||||||
|
with patch('time.strftime', return_value="20101111000000"):
|
||||||
|
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-progress --encrypt --debug --allow-empty".split(" ")).run())
|
||||||
|
self.assertFalse(ZfsAutobackup("test test_target1/encryptedtarget --verbose --no-progress --encrypt --debug --no-snapshot".split(" ")).run())
|
||||||
|
|
||||||
|
with patch('time.strftime', return_value="20101111000001"):
|
||||||
|
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-progress --encrypt --debug --allow-empty".split(" ")).run())
|
||||||
|
self.assertFalse(ZfsAutobackup("test test_target1/encryptedtarget --verbose --no-progress --encrypt --debug --no-snapshot".split(" ")).run())
|
||||||
|
|
||||||
|
r = shelltest("zfs get -r -t filesystem encryptionroot test_target1")
|
||||||
|
self.assertEqual(r, """
|
||||||
|
NAME PROPERTY VALUE SOURCE
|
||||||
|
test_target1 encryptionroot - -
|
||||||
|
test_target1/encryptedtarget encryptionroot test_target1/encryptedtarget -
|
||||||
|
test_target1/encryptedtarget/test_source1 encryptionroot test_target1/encryptedtarget -
|
||||||
|
test_target1/encryptedtarget/test_source1/fs1 encryptionroot test_target1/encryptedtarget -
|
||||||
|
test_target1/encryptedtarget/test_source1/fs1/encryptedsource encryptionroot test_target1/encryptedtarget/test_source1/fs1/encryptedsource -
|
||||||
|
test_target1/encryptedtarget/test_source1/fs1/sub encryptionroot test_target1/encryptedtarget -
|
||||||
|
test_target1/encryptedtarget/test_source2 encryptionroot test_target1/encryptedtarget -
|
||||||
|
test_target1/encryptedtarget/test_source2/fs2 encryptionroot test_target1/encryptedtarget -
|
||||||
|
test_target1/encryptedtarget/test_source2/fs2/sub encryptionroot test_target1/encryptedtarget -
|
||||||
|
test_target1/test_source1 encryptionroot - -
|
||||||
|
test_target1/test_source1/fs1 encryptionroot - -
|
||||||
|
test_target1/test_source1/fs1/encryptedsource encryptionroot test_target1/test_source1/fs1/encryptedsource -
|
||||||
|
test_target1/test_source1/fs1/sub encryptionroot - -
|
||||||
|
test_target1/test_source2 encryptionroot - -
|
||||||
|
test_target1/test_source2/fs2 encryptionroot - -
|
||||||
|
test_target1/test_source2/fs2/sub encryptionroot - -
|
||||||
|
""")
|
||||||
|
|
||||||
|
def test_reencrypt(self):
|
||||||
|
"""reencrypt data (--decrypt --encrypt) """
|
||||||
|
|
||||||
|
self.prepare_encrypted_dataset("11111111", "test_source1/fs1/encryptedsource")
|
||||||
|
self.prepare_encrypted_dataset("22222222", "test_target1/encryptedtarget")
|
||||||
|
|
||||||
|
with patch('time.strftime', return_value="20101111000000"):
|
||||||
|
self.assertFalse(ZfsAutobackup(
|
||||||
|
"test test_target1 --verbose --no-progress --decrypt --encrypt --debug --allow-empty".split(" ")).run())
|
||||||
|
self.assertFalse(ZfsAutobackup(
|
||||||
|
"test test_target1/encryptedtarget --verbose --no-progress --decrypt --encrypt --debug --no-snapshot".split(
|
||||||
|
" ")).run())
|
||||||
|
|
||||||
|
with patch('time.strftime', return_value="20101111000001"):
|
||||||
|
self.assertFalse(ZfsAutobackup(
|
||||||
|
"test test_target1 --verbose --no-progress --decrypt --encrypt --debug --allow-empty".split(" ")).run())
|
||||||
|
self.assertFalse(ZfsAutobackup(
|
||||||
|
"test test_target1/encryptedtarget --verbose --no-progress --decrypt --encrypt --debug --no-snapshot".split(
|
||||||
|
" ")).run())
|
||||||
|
|
||||||
|
r = shelltest("zfs get -r -t filesystem encryptionroot test_target1")
|
||||||
|
self.assertEqual(r, """
|
||||||
|
NAME PROPERTY VALUE SOURCE
|
||||||
|
test_target1 encryptionroot - -
|
||||||
|
test_target1/encryptedtarget encryptionroot test_target1/encryptedtarget -
|
||||||
|
test_target1/encryptedtarget/test_source1 encryptionroot test_target1/encryptedtarget -
|
||||||
|
test_target1/encryptedtarget/test_source1/fs1 encryptionroot test_target1/encryptedtarget -
|
||||||
|
test_target1/encryptedtarget/test_source1/fs1/encryptedsource encryptionroot test_target1/encryptedtarget -
|
||||||
|
test_target1/encryptedtarget/test_source1/fs1/sub encryptionroot test_target1/encryptedtarget -
|
||||||
|
test_target1/encryptedtarget/test_source2 encryptionroot test_target1/encryptedtarget -
|
||||||
|
test_target1/encryptedtarget/test_source2/fs2 encryptionroot test_target1/encryptedtarget -
|
||||||
|
test_target1/encryptedtarget/test_source2/fs2/sub encryptionroot test_target1/encryptedtarget -
|
||||||
|
test_target1/test_source1 encryptionroot - -
|
||||||
|
test_target1/test_source1/fs1 encryptionroot - -
|
||||||
|
test_target1/test_source1/fs1/encryptedsource encryptionroot - -
|
||||||
|
test_target1/test_source1/fs1/sub encryptionroot - -
|
||||||
|
test_target1/test_source2 encryptionroot - -
|
||||||
|
test_target1/test_source2/fs2 encryptionroot - -
|
||||||
|
test_target1/test_source2/fs2/sub encryptionroot - -
|
||||||
|
""")
|
||||||
|
|
||||||
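The four scenarios above differ only in how the send side is invoked: a raw transfer ships the encrypted blocks as-is (no key needed on the target), while `--decrypt` uses a plain send that decrypts first. A minimal sketch of that distinction, using the real `zfs send -w` (raw) flag but a hypothetical helper name of our own:

```python
def build_send_cmd(snapshot, raw=True):
    """Sketch (not zfs-autobackup's actual API): assemble a zfs send
    command. With raw=True the -w flag ships blocks still encrypted,
    so the receiving side never needs the encryption key."""
    cmd = ["zfs", "send"]
    if raw:
        cmd.append("-w")  # raw mode: transfer encrypted blocks unaltered
    cmd.append(snapshot)
    return cmd

# raw send of an encrypted dataset, as the keyless test above relies on
print(build_send_cmd("test_source1/fs1/encryptedsourcekeyless@test-20101111000000"))
```

This is why `test_raw` can unload the key on the source and still expect the transfer to succeed: the data is never decrypted in transit.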
@@ -64,7 +64,7 @@ class TestExecuteNode(unittest2.TestCase):
     def test_readonly(self):
         node=ExecuteNode(debug_output=True, readonly=True)
 
-        self.assertEqual(node.run(["echo","test"], readonly=False), None)
+        self.assertEqual(node.run(["echo","test"], readonly=False), [])
         self.assertEqual(node.run(["echo","test"], readonly=True), ["test"])
 
 
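The one-line change above matters because callers iterate over `run()`'s output: returning `[]` instead of `None` for commands skipped in readonly mode keeps those loops from crashing. A simplified sketch of the skip logic (a hypothetical stand-in, not the real ExecuteNode):

```python
import subprocess

class MiniNode:
    """Sketch: a node that, in readonly mode, skips commands that would
    modify state, but still returns an iterable empty list."""

    def __init__(self, readonly=False):
        self.readonly = readonly

    def run(self, cmd, readonly=False):
        if self.readonly and not readonly:
            return []  # skipped command; was None before this fix
        out = subprocess.run(cmd, capture_output=True, text=True, check=True)
        return out.stdout.splitlines()

node = MiniNode(readonly=True)
assert node.run(["echo", "test"], readonly=False) == []        # skipped, but iterable
assert node.run(["echo", "test"], readonly=True) == ["test"]   # readonly commands still run
```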
@@ -20,7 +20,7 @@ class TestExternalFailures(unittest2.TestCase):
         r = shelltest("dd if=/dev/zero of=/test_target1/waste bs=250M count=1")
 
         # should fail and leave resume token (if supported)
-        self.assertTrue(ZfsAutobackup("test test_target1 --verbose".split(" ")).run())
+        self.assertTrue(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())
 
         # free up space
         r = shelltest("rm /test_target1/waste")
@@ -38,7 +38,7 @@ class TestExternalFailures(unittest2.TestCase):
         # --test should resume and succeed
         with OutputIO() as buf:
             with redirect_stdout(buf):
-                self.assertFalse(ZfsAutobackup("test test_target1 --verbose --test".split(" ")).run())
+                self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --test".split(" ")).run())
 
         print(buf.getvalue())
 
@@ -52,7 +52,7 @@ class TestExternalFailures(unittest2.TestCase):
         # should resume and succeed
         with OutputIO() as buf:
             with redirect_stdout(buf):
-                self.assertFalse(ZfsAutobackup("test test_target1 --verbose".split(" ")).run())
+                self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())
 
         print(buf.getvalue())
 
@@ -82,7 +82,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000
 
         # initial backup
         with patch('time.strftime', return_value="20101111000000"):
-            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty".split(" ")).run())
+            self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty".split(" ")).run())
 
         # incremental backup leaves resume token
         with patch('time.strftime', return_value="20101111000001"):
@@ -91,7 +91,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000
         # --test should resume and succeed
         with OutputIO() as buf:
             with redirect_stdout(buf):
-                self.assertFalse(ZfsAutobackup("test test_target1 --verbose --test".split(" ")).run())
+                self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --test".split(" ")).run())
 
         print(buf.getvalue())
 
@@ -105,7 +105,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000
         # should resume and succeed
         with OutputIO() as buf:
             with redirect_stdout(buf):
-                self.assertFalse(ZfsAutobackup("test test_target1 --verbose".split(" ")).run())
+                self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())
 
         print(buf.getvalue())
 
@@ -149,11 +149,11 @@ test_target1/test_source2/fs2/sub@test-20101111000000
 
         # --test try again, should abort old resume
         with patch('time.strftime', return_value="20101111000001"):
-            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --test".split(" ")).run())
+            self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --test".split(" ")).run())
 
         # try again, should abort old resume
         with patch('time.strftime', return_value="20101111000001"):
-            self.assertFalse(ZfsAutobackup("test test_target1 --verbose".split(" ")).run())
+            self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())
 
         r = shelltest("zfs list -H -o name -r -t all test_target1")
         self.assertMultiLineEqual(r, """
@@ -177,7 +177,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000
 
         # initial backup
         with patch('time.strftime', return_value="20101111000000"):
-            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty".split(" ")).run())
+            self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty".split(" ")).run())
 
         # icremental backup, leaves resume token
         with patch('time.strftime', return_value="20101111000001"):
@@ -188,11 +188,11 @@ test_target1/test_source2/fs2/sub@test-20101111000000
 
         # --test try again, should abort old resume
         with patch('time.strftime', return_value="20101111000002"):
-            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --test".split(" ")).run())
+            self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --test".split(" ")).run())
 
         # try again, should abort old resume
         with patch('time.strftime', return_value="20101111000002"):
-            self.assertFalse(ZfsAutobackup("test test_target1 --verbose".split(" ")).run())
+            self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())
 
         r = shelltest("zfs list -H -o name -r -t all test_target1")
         self.assertMultiLineEqual(r, """
@@ -216,7 +216,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000
             self.skipTest("Resume not supported in this ZFS userspace version")
 
         with patch('time.strftime', return_value="20101111000000"):
-            self.assertFalse(ZfsAutobackup("test test_target1 --verbose".split(" ")).run())
+            self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())
 
         # generate resume
         with patch('time.strftime', return_value="20101111000001"):
@@ -227,7 +227,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000
         # incremental, doesnt want previous anymore
         with patch('time.strftime', return_value="20101111000002"):
             self.assertFalse(ZfsAutobackup(
-                "test test_target1 --verbose --keep-target=0 --debug --allow-empty".split(" ")).run())
+                "test test_target1 --no-progress --verbose --keep-target=0 --debug --allow-empty".split(" ")).run())
 
         print(buf.getvalue())
 
@@ -250,14 +250,14 @@ test_target1/test_source2/fs2/sub@test-20101111000002
     def test_missing_common(self):
 
         with patch('time.strftime', return_value="20101111000000"):
-            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty".split(" ")).run())
+            self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty".split(" ")).run())
 
         # remove common snapshot and leave nothing
         shelltest("zfs release zfs_autobackup:test test_source1/fs1@test-20101111000000")
         shelltest("zfs destroy test_source1/fs1@test-20101111000000")
 
         with patch('time.strftime', return_value="20101111000001"):
-            self.assertTrue(ZfsAutobackup("test test_target1 --verbose --allow-empty".split(" ")).run())
+            self.assertTrue(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty".split(" ")).run())
 
     #UPDATE: offcourse the one thing that wasn't tested had a bug :( (in ExecuteNode.run()).
     def test_ignoretransfererrors(self):
52
tests/test_log.py
Normal file
@@ -0,0 +1,52 @@
+import zfs_autobackup.LogConsole
+from basetest import *
+
+
+class TestLog(unittest2.TestCase):
+
+    def test_colored(self):
+        """test with color output"""
+        with OutputIO() as buf:
+            with redirect_stdout(buf):
+                l=LogConsole(show_verbose=False, show_debug=False, color=True)
+                l.verbose("verbose")
+                l.debug("debug")
+
+            with redirect_stdout(buf):
+                l=LogConsole(show_verbose=True, show_debug=True, color=True)
+                l.verbose("verbose")
+                l.debug("debug")
+
+            with redirect_stderr(buf):
+                l=LogConsole(show_verbose=False, show_debug=False, color=True)
+                l.error("error")
+
+            print(list(buf.getvalue()))
+            self.assertEqual(list(buf.getvalue()), ['\x1b', '[', '2', '2', 'm', ' ', ' ', 'v', 'e', 'r', 'b', 'o', 's', 'e', '\x1b', '[', '0', 'm', '\n', '\x1b', '[', '3', '2', 'm', '#', ' ', 'd', 'e', 'b', 'u', 'g', '\x1b', '[', '0', 'm', '\n', '\x1b', '[', '3', '1', 'm', '\x1b', '[', '1', 'm', '!', ' ', 'e', 'r', 'r', 'o', 'r', '\x1b', '[', '0', 'm', '\n'])
+
+    def test_nocolor(self):
+        """test without color output"""
+
+        with OutputIO() as buf:
+            with redirect_stdout(buf):
+                l=LogConsole(show_verbose=False, show_debug=False, color=False)
+                l.verbose("verbose")
+                l.debug("debug")
+
+            with redirect_stdout(buf):
+                l=LogConsole(show_verbose=True, show_debug=True, color=False)
+                l.verbose("verbose")
+                l.debug("debug")
+
+            with redirect_stderr(buf):
+                l=LogConsole(show_verbose=False, show_debug=False, color=False)
+                l.error("error")
+
+            print(list(buf.getvalue()))
+            self.assertEqual(list(buf.getvalue()), [' ', ' ', 'v', 'e', 'r', 'b', 'o', 's', 'e', '\n', '#', ' ', 'd', 'e', 'b', 'u', 'g', '\n', '!', ' ', 'e', 'r', 'r', 'o', 'r', '\n'])
+
+
+        zfs_autobackup.LogConsole.colorama=False
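The character-by-character assertions above are just the log prefixes wrapped in standard ANSI SGR escape sequences (the same codes colorama emits, which is why the regression workflow now installs colorama). Decomposing one of them makes the expected list readable:

```python
# Standard ANSI SGR codes (not project-specific constants):
GREEN = "\x1b[32m"             # used for '# debug' lines
NORMAL = "\x1b[22m"            # SGR 22 (normal intensity), used for '  verbose' lines
RED_BOLD = "\x1b[31m\x1b[1m"   # red + bold, used for '! error' lines
RESET = "\x1b[0m"

# The colored debug line the test expects, built from its parts:
debug_line = GREEN + "# debug" + RESET + "\n"
print(list(debug_line))
# → ['\x1b', '[', '3', '2', 'm', '#', ' ', 'd', 'e', 'b', 'u', 'g', '\x1b', '[', '0', 'm', '\n']
```

That list is exactly the middle segment of the `test_colored` expectation; the non-colored test drops the escape sequences and keeps only the `  `, `# `, and `! ` prefixes.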
@@ -34,7 +34,7 @@ class TestZfsScaling(unittest2.TestCase):
         with patch.object(ExecuteNode,'run', run_count) as p:
 
             with patch('time.strftime', return_value="20101112000000"):
-                self.assertFalse(ZfsAutobackup("test test_target1 --verbose --keep-source=10000 --keep-target=10000 --no-holds --allow-empty".split(" ")).run())
+                self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --keep-source=10000 --keep-target=10000 --no-holds --allow-empty".split(" ")).run())
 
 
         #this triggers if you make a change with an impact of more than O(snapshot_count/2)
@@ -47,7 +47,7 @@ class TestZfsScaling(unittest2.TestCase):
         with patch.object(ExecuteNode,'run', run_count) as p:
 
             with patch('time.strftime', return_value="20101112000001"):
-                self.assertFalse(ZfsAutobackup("test test_target1 --verbose --keep-source=10000 --keep-target=10000 --no-holds --allow-empty".split(" ")).run())
+                self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --keep-source=10000 --keep-target=10000 --no-holds --allow-empty".split(" ")).run())
 
 
         #this triggers if you make a change with a performance impact of more than O(snapshot_count/2)
@@ -69,11 +69,12 @@ class TestZfsScaling(unittest2.TestCase):
 
         global run_counter
 
+        #first run
         run_counter=0
         with patch.object(ExecuteNode,'run', run_count) as p:
 
             with patch('time.strftime', return_value="20101112000000"):
-                self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-holds --allow-empty".split(" ")).run())
+                self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --no-holds --allow-empty".split(" ")).run())
 
 
         #this triggers if you make a change with an impact of more than O(snapshot_count/2)
@@ -82,11 +83,12 @@ class TestZfsScaling(unittest2.TestCase):
         self.assertLess(abs(run_counter-expected_runs), dataset_count/2)
 
 
+        #second run, should have higher number of expected_runs
        run_counter=0
         with patch.object(ExecuteNode,'run', run_count) as p:
 
             with patch('time.strftime', return_value="20101112000001"):
-                self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-holds --allow-empty".split(" ")).run())
+                self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --no-holds --allow-empty".split(" ")).run())
 
 
         #this triggers if you make a change with a performance impact of more than O(snapshot_count/2)
@@ -23,6 +23,23 @@ class TestThinner(unittest2.TestCase):
 
         # return super().setUp()
 
+    def test_exceptions(self):
+        with self.assertRaisesRegexp(Exception, "^Invalid period"):
+            ThinnerRule("12X12m")
+
+        with self.assertRaisesRegexp(Exception, "^Invalid ttl"):
+            ThinnerRule("12d12X")
+
+        with self.assertRaisesRegexp(Exception, "^Period cant be"):
+            ThinnerRule("12d1d")
+
+        with self.assertRaisesRegexp(Exception, "^Invalid schedule"):
+            ThinnerRule("XXX")
+
+        with self.assertRaisesRegexp(Exception, "^Number of"):
+            Thinner("-1")
+
+
     def test_incremental(self):
         ok=['2023-01-03 10:53:16',
             '2024-01-02 15:43:29',
@@ -138,5 +155,5 @@ class TestThinner(unittest2.TestCase):
         self.assertEqual(result, ok)
 
 
-if __name__ == '__main__':
-    unittest.main()
+# if __name__ == '__main__':
+#     unittest.main()
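The new exception tests pin down the thinning-rule syntax: each rule is `<number><unit><number><unit>` (period, then ttl), with units like `d` for days and `w` for weeks, and the period may not exceed the ttl (hence `12d1d` must fail). A hedged sketch of such a parser, with a hypothetical function name rather than the actual ThinnerRule implementation:

```python
import re

# Seconds per unit, the usual s/m/h/d/w/y convention.
UNITS = {"s": 1, "m": 60, "h": 3600, "d": 86400, "w": 604800, "y": 31536000}

def parse_rule(rule):
    """Sketch: parse a '<period><ttl>' rule like '1d1w' (keep one
    snapshot per day, for one week) into (period, ttl) in seconds."""
    m = re.match(r"^(\d+)([smhdwy])(\d+)([smhdwy])$", rule)
    if not m:
        raise Exception("Invalid schedule: {}".format(rule))
    period = int(m.group(1)) * UNITS[m.group(2)]
    ttl = int(m.group(3)) * UNITS[m.group(4)]
    if period > ttl:
        raise Exception("Period cant be longer than ttl: {}".format(rule))
    return period, ttl

print(parse_rule("1d1w"))  # → (86400, 604800)
```

Note that a real implementation reports "Invalid period" and "Invalid ttl" separately, as the assertions above require; this sketch collapses both into the schedule check.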
@ -1,8 +1,9 @@
|
|||||||
|
from zfs_autobackup.CmdPipe import CmdPipe
|
||||||
|
|
||||||
from basetest import *
|
from basetest import *
|
||||||
import time
|
import time
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
class TestZfsAutobackup(unittest2.TestCase):
|
class TestZfsAutobackup(unittest2.TestCase):
|
||||||
|
|
||||||
def setUp(self):
|
def setUp(self):
|
||||||
@ -11,13 +12,29 @@ class TestZfsAutobackup(unittest2.TestCase):
|
|||||||
|
|
||||||
def test_invalidpars(self):
|
def test_invalidpars(self):
|
||||||
|
|
||||||
self.assertEqual(ZfsAutobackup("test test_target1 --keep-source -1".split(" ")).run(), 255)
|
self.assertEqual(ZfsAutobackup("test test_target1 --no-progress --keep-source -1".split(" ")).run(), 255)
|
||||||
|
|
||||||
|
with OutputIO() as buf:
|
||||||
|
with redirect_stdout(buf):
|
||||||
|
self.assertEqual(ZfsAutobackup("test test_target1 --no-progress --resume --verbose --no-snapshot".split(" ")).run(), 0)
|
||||||
|
|
||||||
|
print(buf.getvalue())
|
||||||
|
self.assertIn("The --resume", buf.getvalue())
|
||||||
|
|
||||||
|
with OutputIO() as buf:
|
||||||
|
with redirect_stderr(buf):
|
||||||
|
self.assertEqual(ZfsAutobackup("test test_target_nonexisting --no-progress".split(" ")).run(), 255)
|
||||||
|
|
||||||
|
print(buf.getvalue())
|
||||||
|
# correct message?
|
||||||
|
self.assertIn("Please create this dataset", buf.getvalue())
|
||||||
|
|
||||||
|
|
||||||
def test_snapshotmode(self):
|
def test_snapshotmode(self):
|
||||||
"""test snapshot tool mode"""
|
"""test snapshot tool mode"""
|
||||||
|
|
||||||
with patch('time.strftime', return_value="20101111000000"):
|
with patch('time.strftime', return_value="20101111000000"):
|
||||||
self.assertFalse(ZfsAutobackup("test --verbose".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test --no-progress --verbose".split(" ")).run())
|
||||||
|
|
||||||
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
|
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
|
||||||
self.assertMultiLineEqual(r,"""
|
self.assertMultiLineEqual(r,"""
|
||||||
@ -35,15 +52,13 @@ test_source2/fs3/sub
|
|||||||
test_target1
|
test_target1
|
||||||
""")
|
""")
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
def test_defaults(self):
|
def test_defaults(self):
|
||||||
|
|
||||||
with self.subTest("no datasets selected"):
|
with self.subTest("no datasets selected"):
|
||||||
with OutputIO() as buf:
|
with OutputIO() as buf:
|
||||||
with redirect_stderr(buf):
|
with redirect_stderr(buf):
|
||||||
with patch('time.strftime', return_value="20101111000000"):
|
with patch('time.strftime', return_value="20101111000000"):
|
||||||
self.assertTrue(ZfsAutobackup("nonexisting test_target1 --verbose --debug".split(" ")).run())
|
self.assertTrue(ZfsAutobackup("nonexisting test_target1 --verbose --debug --no-progress".split(" ")).run())
|
||||||
|
|
||||||
print(buf.getvalue())
|
print(buf.getvalue())
|
||||||
#correct message?
|
#correct message?
|
||||||
@ -53,7 +68,7 @@ test_target1
|
|||||||
with self.subTest("defaults with full verbose and debug"):
|
with self.subTest("defaults with full verbose and debug"):
|
||||||
|
|
||||||
with patch('time.strftime', return_value="20101111000000"):
|
with patch('time.strftime', return_value="20101111000000"):
|
||||||
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --debug".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --debug --no-progress".split(" ")).run())
|
||||||
|
|
||||||
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
|
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
|
||||||
self.assertMultiLineEqual(r,"""
|
self.assertMultiLineEqual(r,"""
|
||||||
@ -82,7 +97,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000
|
|||||||
|
|
||||||
with self.subTest("bare defaults, allow empty"):
|
with self.subTest("bare defaults, allow empty"):
|
||||||
with patch('time.strftime', return_value="20101111000001"):
|
with patch('time.strftime', return_value="20101111000001"):
|
||||||
self.assertFalse(ZfsAutobackup("test test_target1 --allow-empty".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test test_target1 --allow-empty --no-progress".split(" ")).run())
|
||||||
|
|
||||||
|
|
||||||
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
|
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
|
||||||
@@ -153,14 +168,14 @@ test_target1/test_source2/fs2/sub@test-20101111000001 userrefs 1 -
#make sure time handling is correctly. try to make snapshots a year appart and verify that only snapshots mostly 1y old are kept
with self.subTest("test time checking"):
with patch('time.strftime', return_value="20111111000000"):
- self.assertFalse(ZfsAutobackup("test test_target1 --allow-empty --verbose".split(" ")).run())
+ self.assertFalse(ZfsAutobackup("test test_target1 --allow-empty --verbose --no-progress".split(" ")).run())


time_str="20111112000000" #month in the "future"
future_timestamp=time_secs=time.mktime(time.strptime(time_str,"%Y%m%d%H%M%S"))
with patch('time.time', return_value=future_timestamp):
with patch('time.strftime', return_value="20111111000001"):
- self.assertFalse(ZfsAutobackup("test test_target1 --allow-empty --verbose --keep-source 1y1y --keep-target 1d1y".split(" ")).run())
+ self.assertFalse(ZfsAutobackup("test test_target1 --allow-empty --verbose --keep-source 1y1y --keep-target 1d1y --no-progress".split(" ")).run())


r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
@@ -194,14 +209,13 @@ test_target1/test_source2/fs2/sub@test-20111111000000
test_target1/test_source2/fs2/sub@test-20111111000001
""")


def test_ignore_othersnaphots(self):

r=shelltest("zfs snapshot test_source1/fs1@othersimple")
r=shelltest("zfs snapshot test_source1/fs1@otherdate-20001111000000")

with patch('time.strftime', return_value="20101111000000"):
- self.assertFalse(ZfsAutobackup("test test_target1 --verbose".split(" ")).run())
+ self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())

r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
self.assertMultiLineEqual(r,"""
@@ -236,7 +250,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000
r=shelltest("zfs snapshot test_source1/fs1@otherdate-20001111000000")

with patch('time.strftime', return_value="20101111000000"):
- self.assertFalse(ZfsAutobackup("test test_target1 --verbose --other-snapshots".split(" ")).run())
+ self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --other-snapshots".split(" ")).run())

r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
self.assertMultiLineEqual(r,"""
@@ -271,7 +285,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000
def test_nosnapshot(self):

with patch('time.strftime', return_value="20101111000000"):
- self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-snapshot".split(" ")).run())
+ self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-snapshot --no-progress".split(" ")).run())

r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
#(only parents are created )
@@ -295,7 +309,7 @@ test_target1/test_source2/fs2
def test_nosend(self):

with patch('time.strftime', return_value="20101111000000"):
- self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-send".split(" ")).run())
+ self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-send --no-progress".split(" ")).run())

r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
#(only parents are created )
@@ -320,7 +334,7 @@ test_target1
r=shelltest("zfs snapshot test_source1/fs1@otherreplication")

with patch('time.strftime', return_value="20101111000000"):
- self.assertFalse(ZfsAutobackup("test test_target1 --verbose --ignore-replicated".split(" ")).run())
+ self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --ignore-replicated".split(" ")).run())

r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
#(only parents are created )
@@ -351,7 +365,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000
def test_noholds(self):

with patch('time.strftime', return_value="20101111000000"):
- self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-holds".split(" ")).run())
+ self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-holds --no-progress".split(" ")).run())

r=shelltest("zfs get -r userrefs test_source1 test_source2 test_target1")
self.assertMultiLineEqual(r,"""
@@ -383,7 +397,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000 userrefs 0 -
def test_strippath(self):

with patch('time.strftime', return_value="20101111000000"):
- self.assertFalse(ZfsAutobackup("test test_target1 --verbose --strip-path=1".split(" ")).run())
+ self.assertFalse(ZfsAutobackup("test test_target1 --verbose --strip-path=1 --no-progress".split(" ")).run())

r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
self.assertMultiLineEqual(r,"""
@@ -419,7 +433,7 @@ test_target1/fs2/sub@test-20101111000000
r=shelltest("zfs set refreservation=1M test_source1/fs1")

with patch('time.strftime', return_value="20101111000000"):
- self.assertFalse(ZfsAutobackup("test test_target1 --verbose --clear-refreservation".split(" ")).run())
+ self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --clear-refreservation".split(" ")).run())

r=shelltest("zfs get refreservation -r test_source1 test_source2 test_target1")
self.assertMultiLineEqual(r,"""
@@ -457,7 +471,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000 refreservation -


with patch('time.strftime', return_value="20101111000000"):
- self.assertFalse(ZfsAutobackup("test test_target1 --verbose --clear-mountpoint --debug".split(" ")).run())
+ self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --clear-mountpoint --debug".split(" ")).run())

r=shelltest("zfs get canmount -r test_source1 test_source2 test_target1")
self.assertMultiLineEqual(r,"""
@@ -490,7 +504,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000 canmount - -

#initial backup
with patch('time.strftime', return_value="20101111000000"):
- self.assertFalse(ZfsAutobackup("test test_target1 --verbose".split(" ")).run())
+ self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())

#make change
r=shelltest("zfs mount test_target1/test_source1/fs1")
@@ -498,18 +512,18 @@ test_target1/test_source2/fs2/sub@test-20101111000000 canmount - -

with patch('time.strftime', return_value="20101111000001"):
#should fail (busy)
- self.assertTrue(ZfsAutobackup("test test_target1 --verbose --allow-empty".split(" ")).run())
+ self.assertTrue(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty".split(" ")).run())

with patch('time.strftime', return_value="20101111000002"):
#rollback, should succeed
- self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty --rollback".split(" ")).run())
+ self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --rollback".split(" ")).run())


def test_destroyincompat(self):

#initial backup
with patch('time.strftime', return_value="20101111000000"):
- self.assertFalse(ZfsAutobackup("test test_target1 --verbose".split(" ")).run())
+ self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())

#add multiple compatible snapshot (written is still 0)
r=shelltest("zfs snapshot test_target1/test_source1/fs1@compatible1")
@@ -517,7 +531,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000 canmount - -

with patch('time.strftime', return_value="20101111000001"):
#should be ok, is compatible
- self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty".split(" ")).run())
+ self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty".split(" ")).run())

#add incompatible snapshot by changing and snapshotting
r=shelltest("zfs mount test_target1/test_source1/fs1")
@@ -527,19 +541,19 @@ test_target1/test_source2/fs2/sub@test-20101111000000 canmount - -

with patch('time.strftime', return_value="20101111000002"):
#--test should fail, now incompatible
- self.assertTrue(ZfsAutobackup("test test_target1 --verbose --allow-empty --test".split(" ")).run())
+ self.assertTrue(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --test".split(" ")).run())

with patch('time.strftime', return_value="20101111000002"):
#should fail, now incompatible
- self.assertTrue(ZfsAutobackup("test test_target1 --verbose --allow-empty".split(" ")).run())
+ self.assertTrue(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty".split(" ")).run())

with patch('time.strftime', return_value="20101111000003"):
#--test should succeed by destroying incompatibles
- self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty --destroy-incompatible --test".split(" ")).run())
+ self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --destroy-incompatible --test".split(" ")).run())

with patch('time.strftime', return_value="20101111000003"):
#should succeed by destroying incompatibles
- self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty --destroy-incompatible".split(" ")).run())
+ self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --destroy-incompatible".split(" ")).run())

r = shelltest("zfs list -H -o name -r -t all test_target1")
self.assertMultiLineEqual(r, """
@@ -576,13 +590,13 @@ test_target1/test_source2/fs2/sub@test-20101111000003
#test all ssh directions

with patch('time.strftime', return_value="20101111000000"):
- self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty --ssh-source localhost".split(" ")).run())
+ self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --ssh-source localhost".split(" ")).run())

with patch('time.strftime', return_value="20101111000001"):
- self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty --ssh-target localhost".split(" ")).run())
+ self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --ssh-target localhost".split(" ")).run())

with patch('time.strftime', return_value="20101111000002"):
- self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty --ssh-source localhost --ssh-target localhost".split(" ")).run())
+ self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --ssh-source localhost --ssh-target localhost".split(" ")).run())


r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
@@ -627,7 +641,7 @@ test_target1/test_source2/fs2/sub@test-20101111000002

#initial
with patch('time.strftime', return_value="20101111000000"):
- self.assertFalse(ZfsAutobackup("test test_target1 --verbose --min-change 100000".split(" ")).run())
+ self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --min-change 100000".split(" ")).run())

#make small change, use umount to reflect the changes immediately
r=shelltest("zfs set compress=off test_source1")
@@ -637,7 +651,7 @@ test_target1/test_source2/fs2/sub@test-20101111000002

#too small change, takes no snapshots
with patch('time.strftime', return_value="20101111000001"):
- self.assertFalse(ZfsAutobackup("test test_target1 --verbose --min-change 100000".split(" ")).run())
+ self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --min-change 100000".split(" ")).run())

#make big change
r=shelltest("dd if=/dev/zero of=/test_source1/fs1/change.txt bs=200000 count=1")
@@ -645,7 +659,7 @@ test_target1/test_source2/fs2/sub@test-20101111000002

#bigger change, should take snapshot
with patch('time.strftime', return_value="20101111000002"):
- self.assertFalse(ZfsAutobackup("test test_target1 --verbose --min-change 100000".split(" ")).run())
+ self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --min-change 100000".split(" ")).run())

r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
self.assertMultiLineEqual(r,"""
@@ -678,7 +692,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000

#initial
with patch('time.strftime', return_value="20101111000000"):
- self.assertFalse(ZfsAutobackup("test test_target1 --verbose --test".split(" ")).run())
+ self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --test".split(" ")).run())

r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
self.assertMultiLineEqual(r,"""
@@ -695,12 +709,12 @@ test_target1

#actual make initial backup
with patch('time.strftime', return_value="20101111000001"):
- self.assertFalse(ZfsAutobackup("test test_target1 --verbose".split(" ")).run())
+ self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())


#test incremental
- with patch('time.strftime', return_value="20101111000000"):
- self.assertFalse(ZfsAutobackup("test test_target1 --verbose --test".split(" ")).run())
+ with patch('time.strftime', return_value="20101111000002"):
+ self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --allow-empty --verbose --test".split(" ")).run())

r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
self.assertMultiLineEqual(r,"""
@@ -736,7 +750,7 @@ test_target1/test_source2/fs2/sub@test-20101111000001
shelltest("zfs send test_source1/fs1@migrate1| zfs recv test_target1/test_source1/fs1")

with patch('time.strftime', return_value="20101111000000"):
- self.assertFalse(ZfsAutobackup("test test_target1 --verbose".split(" ")).run())
+ self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())

r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
self.assertMultiLineEqual(r,"""
@@ -769,15 +783,15 @@ test_target1/test_source2/fs2/sub@test-20101111000000
"""test if keep-source=0 and keep-target=0 dont delete common snapshot and break backup"""

with patch('time.strftime', return_value="20101111000000"):
- self.assertFalse(ZfsAutobackup("test test_target1 --verbose --keep-source=0 --keep-target=0".split(" ")).run())
+ self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --keep-source=0 --keep-target=0".split(" ")).run())

#make snapshot, shouldnt delete 0
with patch('time.strftime', return_value="20101111000001"):
- self.assertFalse(ZfsAutobackup("test --verbose --keep-source=0 --keep-target=0 --allow-empty".split(" ")).run())
+ self.assertFalse(ZfsAutobackup("test --no-progress --verbose --keep-source=0 --keep-target=0 --allow-empty".split(" ")).run())

#make snapshot 2, shouldnt delete 0 since it has holds, but will delete 1 since it has no holds
with patch('time.strftime', return_value="20101111000002"):
- self.assertFalse(ZfsAutobackup("test --verbose --keep-source=0 --keep-target=0 --allow-empty".split(" ")).run())
+ self.assertFalse(ZfsAutobackup("test --no-progress --verbose --keep-source=0 --keep-target=0 --allow-empty".split(" ")).run())

r = shelltest("zfs list -H -o name -r -t all " + TEST_POOLS)
self.assertMultiLineEqual(r, """
@@ -809,7 +823,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000

#make another backup but with no-holds. we should naturally endup with only number 3
with patch('time.strftime', return_value="20101111000003"):
- self.assertFalse(ZfsAutobackup("test test_target1 --verbose --keep-source=0 --keep-target=0 --no-holds --allow-empty".split(" ")).run())
+ self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --keep-source=0 --keep-target=0 --no-holds --allow-empty".split(" ")).run())

r = shelltest("zfs list -H -o name -r -t all " + TEST_POOLS)
self.assertMultiLineEqual(r, """
@@ -839,7 +853,7 @@ test_target1/test_source2/fs2/sub@test-20101111000003

# make snapshot 4, since we used no-holds, it will delete 3 on the source, breaking the backup
with patch('time.strftime', return_value="20101111000004"):
- self.assertFalse(ZfsAutobackup("test --verbose --keep-source=0 --keep-target=0 --allow-empty".split(" ")).run())
+ self.assertFalse(ZfsAutobackup("test --no-progress --verbose --keep-source=0 --keep-target=0 --allow-empty".split(" ")).run())

r = shelltest("zfs list -H -o name -r -t all " + TEST_POOLS)
self.assertMultiLineEqual(r, """
@@ -866,9 +880,26 @@ test_target1/test_source2/fs2/sub
test_target1/test_source2/fs2/sub@test-20101111000003
""")

- ###########################
- # TODO:

- def test_raw(self):
+ def test_progress(self):

- self.skipTest("todo: later when travis supports zfs 0.8")
+ r=shelltest("dd if=/dev/zero of=/test_source1/data.txt bs=200000 count=1")
+ r = shelltest("zfs snapshot test_source1@test")
+
+ l=LogConsole(show_verbose=True, show_debug=False, color=False)
+ n=ZfsNode("test",l)
+ d=ZfsDataset(n,"test_source1@test")
+
+ sp=d.send_pipe([], prev_snapshot=None, resume_token=None, show_progress=True, raw=False, output_pipes=[], send_properties=True, write_embedded=True)
+
+ with OutputIO() as buf:
+ with redirect_stderr(buf):
+ try:
+ n.run(["sleep", "2"], inp=sp)
+ except:
+ pass
+
+ print(buf.getvalue())
+ # correct message?
+ self.assertRegex(buf.getvalue(),".*>>> .*minutes left.*")
@@ -10,10 +10,10 @@ class TestZfsAutobackup31(unittest2.TestCase):
def test_no_thinning(self):

with patch('time.strftime', return_value="20101111000000"):
- self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty".split(" ")).run())
+ self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty".split(" ")).run())

with patch('time.strftime', return_value="20101111000001"):
- self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty --keep-target=0 --keep-source=0 --no-thinning".split(" ")).run())
+ self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --keep-target=0 --keep-source=0 --no-thinning".split(" ")).run())

r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
self.assertMultiLineEqual(r,"""
zfs_autobackup/CmdPipe.py (new file, 126 lines)
@@ -0,0 +1,126 @@
import subprocess
import os
import select


class CmdPipe:
    """a pipe of one or more commands. also takes care of utf-8 encoding/decoding and line based parsing"""

    def __init__(self, readonly=False, inp=None):
        """
        :param inp: input string for stdin
        :param readonly: Only execute if entire pipe consist of readonly commands
        """
        # list of commands + error handlers to execute
        self.items = []

        self.inp = inp
        self.readonly = readonly
        self._should_execute = True

    def add(self, cmd, readonly=False, stderr_handler=None):
        """adds a command to pipe"""

        self.items.append({
            'cmd': cmd,
            'stderr_handler': stderr_handler
        })

        if not readonly and self.readonly:
            self._should_execute = False

    def __str__(self):
        """transform into oneliner for debugging and testing """

        #just one command?
        if len(self.items)==1:
            return " ".join(self.items[0]['cmd'])

        #an actual pipe
        ret = ""
        for item in self.items:
            if ret:
                ret = ret + " | "
            ret = ret + "(" + " ".join(item['cmd']) + ")"

        return ret

    def should_execute(self):
        return(self._should_execute)

    def execute(self, stdout_handler):
        """run the pipe. returns True if it executed, and false if it skipped due to readonly conditions"""

        if not self._should_execute:
            return False

        # first process should have actual user input as stdin:
        selectors = []

        # create processes
        last_stdout = None
        stdin = subprocess.PIPE
        for item in self.items:

            # make sure the command gets all the data in utf8 format:
            # (this is necessary if LC_ALL=en_US.utf8 is not set in the environment)
            encoded_cmd = []
            for arg in item['cmd']:
                encoded_cmd.append(arg.encode('utf-8'))

            item['process'] = subprocess.Popen(encoded_cmd, env=os.environ, stdout=subprocess.PIPE, stdin=stdin,
                                               stderr=subprocess.PIPE)

            selectors.append(item['process'].stderr)

            if last_stdout is None:
                # we're the first process in the pipe, do we have some input?
                if self.inp is not None:
                    # TODO: make streaming to support big inputs?
                    item['process'].stdin.write(self.inp.encode('utf-8'))
                    item['process'].stdin.close()
            else:
                #last stdout was piped to this stdin already, so close it because we dont need it anymore
                last_stdout.close()

            last_stdout = item['process'].stdout
            stdin=last_stdout

        # monitor last stdout as well
        selectors.append(last_stdout)

        while True:
            # wait for output on one of the stderrs or last_stdout
            (read_ready, write_ready, ex_ready) = select.select(selectors, [], [])
            eof_count = 0
            done_count = 0

            # read line and call appropriate handlers
            if last_stdout in read_ready:
                line = last_stdout.readline().decode('utf-8').rstrip()
                if line != "":
                    stdout_handler(line)
                else:
                    eof_count = eof_count + 1

            for item in self.items:
                if item['process'].stderr in read_ready:
                    line = item['process'].stderr.readline().decode('utf-8').rstrip()
                    if line != "":
                        item['stderr_handler'](line)
                    else:
                        eof_count = eof_count + 1

                if item['process'].poll() is not None:
                    done_count = done_count + 1

            # all filehandles are eof and all processes are done (poll() is not None)
            if eof_count == len(selectors) and done_count == len(self.items):
                break

        # ret = []
        last_stdout.close()
        for item in self.items:
            item['process'].stderr.close()
            # ret.append(item['process'].returncode)

        return True
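The CmdPipe class above chains commands like a shell pipeline while keeping each command's stderr separate so it can be routed to a per-command handler. The core pattern (wire each process's stdout into the next process's stdin, close the parent's copy of the intermediate pipe, then read the last stdout) can be sketched with plain `subprocess`; `run_pipe` below is an illustrative helper, not part of zfs-autobackup:

```python
import subprocess

def run_pipe(cmds, stdout_handler):
    """Run cmds as 'cmd1 | cmd2 | ...'; feed each line of the final stdout to stdout_handler."""
    procs = []
    stdin = None
    for cmd in cmds:
        p = subprocess.Popen(cmd, stdin=stdin, stdout=subprocess.PIPE)
        if stdin is not None:
            stdin.close()  # close the parent's copy; the child keeps its own handle
        stdin = p.stdout
        procs.append(p)
    for line in procs[-1].stdout:
        stdout_handler(line.decode('utf-8').rstrip())
    for p in procs:
        p.wait()

lines = []
run_pipe([["echo", "hello\nworld"], ["grep", "h"]], lines.append)
print(lines)  # → ['hello']
```

Closing the parent's copy of each intermediate stdout matters: it lets an upstream process receive SIGPIPE if a downstream one exits early, instead of blocking forever. CmdPipe additionally multiplexes all the stderr streams with `select.select`, which this sketch omits.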
@@ -2,6 +2,7 @@ import os
import select
import subprocess

+ from zfs_autobackup.CmdPipe import CmdPipe
from zfs_autobackup.LogStub import LogStub

@@ -38,175 +39,112 @@ class ExecuteNode(LogStub):
else:
self.error("STDERR > " + line.rstrip())

- def _parse_stderr_pipe(self, line, hide_errors):
- """parse stderr from pipe input process. can be overridden in subclass"""
- if hide_errors:
- self.debug("STDERR|> " + line.rstrip())
- else:
- self.error("STDERR|> " + line.rstrip())
+ # def _parse_stderr_pipe(self, line, hide_errors):
+ # """parse stderr from pipe input process. can be overridden in subclass"""
+ # if hide_errors:
+ # self.debug("STDERR|> " + line.rstrip())
+ # else:
+ # self.error("STDERR|> " + line.rstrip())

- def _encode_cmd(self, cmd):
- """returns cmd in encoded and escaped form that can be used with popen."""
+ def _remote_cmd(self, cmd):
+ """transforms cmd in correct form for remote over ssh, if needed"""

- encoded_cmd=[]
- # make sure the command gets all the data in utf8 format:
- # (this is necessary if LC_ALL=en_US.utf8 is not set in the environment)

# use ssh?
if self.ssh_to is not None:
- encoded_cmd.append("ssh".encode('utf-8'))
+ encoded_cmd = []
+ encoded_cmd.append("ssh")

if self.ssh_config is not None:
- encoded_cmd.extend(["-F".encode('utf-8'), self.ssh_config.encode('utf-8')])
+ encoded_cmd.extend(["-F", self.ssh_config])

- encoded_cmd.append(self.ssh_to.encode('utf-8'))
+ encoded_cmd.append(self.ssh_to)

for arg in cmd:
# add single quotes for remote commands to support spaces and other weird stuff (remote commands are
# executed in a shell) and escape existing single quotes (bash needs ' to end the quoted string,
# then a \' for the actual quote and then another ' to start a new quoted string) (and then python
# needs the double \ to get a single \)
||||||
encoded_cmd.append(("'" + arg.replace("'", "'\\''") + "'").encode('utf-8'))
|
encoded_cmd.append(("'" + arg.replace("'", "'\\''") + "'"))
|
||||||
|
|
||||||
else:
|
|
||||||
for arg in cmd:
|
|
||||||
encoded_cmd.append(arg.encode('utf-8'))
|
|
||||||
|
|
||||||
return encoded_cmd
|
return encoded_cmd
|
||||||
|
else:
|
||||||
|
return(cmd)
|
||||||
|
|
||||||
def run(self, cmd, inp=None, tab_split=False, valid_exitcodes=None, readonly=False, hide_errors=False, pipe=False,
|
|
||||||
return_stderr=False):
|
def is_local(self):
|
||||||
"""run a command on the node.
|
return self.ssh_to is None
|
||||||
|
|
||||||
|
|
||||||
|
def run(self, cmd, inp=None, tab_split=False, valid_exitcodes=None, readonly=False, hide_errors=False,
|
||||||
|
return_stderr=False, pipe=False):
|
||||||
|
"""run a command on the node , checks output and parses/handle output and returns it
|
||||||
|
|
||||||
:param cmd: the actual command, should be a list, where the first item is the command
|
:param cmd: the actual command, should be a list, where the first item is the command
|
||||||
and the rest are parameters.
|
and the rest are parameters.
|
||||||
:param inp: Can be None, a string or a pipe-handle you got from another run()
|
:param pipe: return CmdPipe instead of executing it.
|
||||||
|
:param inp: Can be None, a string or a CmdPipe that was previously returned.
|
||||||
:param tab_split: split tabbed files in output into a list
|
:param tab_split: split tabbed files in output into a list
|
||||||
:param valid_exitcodes: list of valid exit codes for this command (checks exit code of both sides of a pipe)
|
:param valid_exitcodes: list of valid exit codes for this command (checks exit code of both sides of a pipe)
|
||||||
Use [] to accept all exit codes.
|
Use [] to accept all exit codes. Default [0]
|
||||||
:param readonly: make this True if the command doesn't make any changes and is safe to execute in testmode
|
:param readonly: make this True if the command doesn't make any changes and is safe to execute in testmode
|
||||||
:param hide_errors: don't show stderr output as error, instead show it as debugging output (use to hide expected errors)
|
:param hide_errors: don't show stderr output as error, instead show it as debugging output (use to hide expected errors)
|
||||||
:param pipe: Instead of executing, return a pipe-handle to be used to
|
|
||||||
input to another run() command. (just like a | in linux)
|
|
||||||
:param return_stderr: return both stdout and stderr as a tuple. (normally only returns stdout)
|
:param return_stderr: return both stdout and stderr as a tuple. (normally only returns stdout)
|
||||||
|
|
||||||
"""
|
"""
|
||||||
|
|
||||||
if valid_exitcodes is None:
|
# create new pipe?
|
||||||
valid_exitcodes = [0]
|
if not isinstance(inp, CmdPipe):
|
||||||
|
p = CmdPipe(self.readonly, inp)
|
||||||
encoded_cmd = self._encode_cmd(cmd)
|
|
||||||
|
|
||||||
# debug and test stuff
|
|
||||||
debug_txt = ""
|
|
||||||
for c in encoded_cmd:
|
|
||||||
debug_txt = debug_txt + " " + c.decode()
|
|
||||||
|
|
||||||
if pipe:
|
|
||||||
debug_txt = debug_txt + " |"
|
|
||||||
|
|
||||||
if self.readonly and not readonly:
|
|
||||||
self.debug("SKIP > " + debug_txt)
|
|
||||||
else:
|
else:
|
||||||
if pipe:
|
# add stuff to existing pipe
|
||||||
self.debug("PIPE > " + debug_txt)
|
p = inp
|
||||||
else:
|
|
||||||
self.debug("RUN > " + debug_txt)
|
|
||||||
|
|
||||||
# determine stdin
|
# stderr parser
|
||||||
if inp is None:
|
|
||||||
# NOTE: Not None, otherwise it reads stdin from terminal!
|
|
||||||
stdin = subprocess.PIPE
|
|
||||||
elif isinstance(inp, str) or type(inp) == 'unicode':
|
|
||||||
self.debug("INPUT > \n" + inp.rstrip())
|
|
||||||
stdin = subprocess.PIPE
|
|
||||||
elif isinstance(inp, subprocess.Popen):
|
|
||||||
self.debug("Piping input")
|
|
||||||
stdin = inp.stdout
|
|
||||||
else:
|
|
||||||
raise (Exception("Program error: Incompatible input"))
|
|
||||||
|
|
||||||
if self.readonly and not readonly:
|
|
||||||
# todo: what happens if input is piped?
|
|
||||||
return
|
|
||||||
|
|
||||||
# execute and parse/return results
|
|
||||||
p = subprocess.Popen(encoded_cmd, env=os.environ, stdout=subprocess.PIPE, stdin=stdin, stderr=subprocess.PIPE)
|
|
||||||
|
|
||||||
# Note: make streaming?
|
|
||||||
if isinstance(inp, str) or type(inp) == 'unicode':
|
|
||||||
p.stdin.write(inp.encode('utf-8'))
|
|
||||||
|
|
||||||
if p.stdin:
|
|
||||||
p.stdin.close()
|
|
||||||
|
|
||||||
# return pipe
|
|
||||||
if pipe:
|
|
||||||
return p
|
|
||||||
|
|
||||||
# handle all outputs
|
|
||||||
if isinstance(inp, subprocess.Popen):
|
|
||||||
selectors = [p.stdout, p.stderr, inp.stderr]
|
|
||||||
inp.stdout.close() # otherwise inputprocess wont exit when ours does
|
|
||||||
else:
|
|
||||||
selectors = [p.stdout, p.stderr]
|
|
||||||
|
|
||||||
output_lines = []
|
|
||||||
error_lines = []
|
error_lines = []
|
||||||
while True:
|
def stderr_handler(line):
|
||||||
(read_ready, write_ready, ex_ready) = select.select(selectors, [], [])
|
|
||||||
eof_count = 0
|
|
||||||
if p.stdout in read_ready:
|
|
||||||
line = p.stdout.readline().decode('utf-8')
|
|
||||||
if line != "":
|
|
||||||
if tab_split:
|
|
||||||
output_lines.append(line.rstrip().split('\t'))
|
|
||||||
else:
|
|
||||||
output_lines.append(line.rstrip())
|
|
||||||
self._parse_stdout(line)
|
|
||||||
else:
|
|
||||||
eof_count = eof_count + 1
|
|
||||||
if p.stderr in read_ready:
|
|
||||||
line = p.stderr.readline().decode('utf-8')
|
|
||||||
if line != "":
|
|
||||||
if tab_split:
|
if tab_split:
|
||||||
error_lines.append(line.rstrip().split('\t'))
|
error_lines.append(line.rstrip().split('\t'))
|
||||||
else:
|
else:
|
||||||
error_lines.append(line.rstrip())
|
error_lines.append(line.rstrip())
|
||||||
self._parse_stderr(line, hide_errors)
|
self._parse_stderr(line, hide_errors)
|
||||||
else:
|
|
||||||
eof_count = eof_count + 1
|
|
||||||
if isinstance(inp, subprocess.Popen) and (inp.stderr in read_ready):
|
|
||||||
line = inp.stderr.readline().decode('utf-8')
|
|
||||||
if line != "":
|
|
||||||
self._parse_stderr_pipe(line, hide_errors)
|
|
||||||
else:
|
|
||||||
eof_count = eof_count + 1
|
|
||||||
|
|
||||||
# stop if both processes are done and all filehandles are EOF:
|
# add command to pipe
|
||||||
if (p.poll() is not None) and (
|
encoded_cmd = self._remote_cmd(cmd)
|
||||||
(not isinstance(inp, subprocess.Popen)) or inp.poll() is not None) and eof_count == len(selectors):
|
p.add(cmd=encoded_cmd, readonly=readonly, stderr_handler=stderr_handler)
|
||||||
break
|
|
||||||
|
|
||||||
p.stderr.close()
|
# return pipe instead of executing?
|
||||||
p.stdout.close()
|
if pipe:
|
||||||
|
return p
|
||||||
|
|
||||||
|
# stdout parser
|
||||||
|
output_lines = []
|
||||||
|
def stdout_handler(line):
|
||||||
|
if tab_split:
|
||||||
|
output_lines.append(line.rstrip().split('\t'))
|
||||||
|
else:
|
||||||
|
output_lines.append(line.rstrip())
|
||||||
|
self._parse_stdout(line)
|
||||||
|
|
||||||
|
if p.should_execute():
|
||||||
|
self.debug("CMD > {}".format(p))
|
||||||
|
else:
|
||||||
|
self.debug("CMDSKIP> {}".format(p))
|
||||||
|
|
||||||
|
# execute and verify exit codes
|
||||||
|
if p.execute(stdout_handler=stdout_handler) and valid_exitcodes is not []:
|
||||||
|
if valid_exitcodes is None:
|
||||||
|
valid_exitcodes = [0]
|
||||||
|
|
||||||
|
item_nr=1
|
||||||
|
for item in p.items:
|
||||||
|
exit_code=item['process'].returncode
|
||||||
|
|
||||||
if self.debug_output:
|
if self.debug_output:
|
||||||
self.debug("EXIT > {}".format(p.returncode))
|
self.debug("EXIT{} > {}".format(item_nr, exit_code))
|
||||||
|
|
||||||
# handle piped process error output and exit codes
|
if exit_code not in valid_exitcodes:
|
||||||
if isinstance(inp, subprocess.Popen):
|
raise (subprocess.CalledProcessError(exit_code, " ".join(item['cmd'])))
|
||||||
inp.stderr.close()
|
item_nr=item_nr+1
|
||||||
inp.stdout.close()
|
|
||||||
|
|
||||||
if self.debug_output:
|
|
||||||
self.debug("EXIT |> {}".format(inp.returncode))
|
|
||||||
if valid_exitcodes and inp.returncode not in valid_exitcodes:
|
|
||||||
raise (subprocess.CalledProcessError(inp.returncode, "(pipe)"))
|
|
||||||
|
|
||||||
if valid_exitcodes and p.returncode not in valid_exitcodes:
|
|
||||||
raise (subprocess.CalledProcessError(p.returncode, encoded_cmd))
|
|
||||||
|
|
||||||
if return_stderr:
|
if return_stderr:
|
||||||
return output_lines, error_lines
|
return output_lines, error_lines
|
||||||
|
|||||||
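`_remote_cmd` wraps every argument in single quotes and escapes embedded quotes with the `'\''` idiom, because the remote end of ssh runs the command through a shell. The escaping rule can be checked in isolation by round-tripping an awkward argument through a local shell (a standalone sketch, not the ExecuteNode API):

```python
import subprocess

def shell_quote(arg):
    # bash: close the quoted string, emit an escaped quote, reopen the string
    return "'" + arg.replace("'", "'\\''") + "'"

arg = "it's a file with spaces"
out = subprocess.run(["sh", "-c", "printf %s " + shell_quote(arg)],
                     capture_output=True, text=True).stdout
assert out == arg
```

The quoted form here is `'it'\''s a file with spaces'`: three concatenated shell tokens that the remote shell joins back into the original string.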
```diff
@@ -3,27 +3,29 @@ from __future__ import print_function
 
 import sys
 
 
-colorama = False
-if sys.stdout.isatty():
-    try:
-        import colorama
-    except ImportError:
-        colorama = False
-        pass
-
-
 class LogConsole:
     """Log-class that outputs to console, adding colors if needed"""
 
-    def __init__(self, show_debug=False, show_verbose=False):
+    def __init__(self, show_debug, show_verbose, color):
         self.last_log = ""
         self.show_debug = show_debug
         self.show_verbose = show_verbose
 
-    @staticmethod
-    def error(txt):
-        if colorama:
+        if color:
+            # try to use color, failback if colorama not available
+            self.colorama=False
+            try:
+                import colorama
+                global colorama
+                self.colorama = True
+            except ImportError:
+                pass
+
+        else:
+            self.colorama=False
+
+    def error(self, txt):
+        if self.colorama:
             print(colorama.Fore.RED + colorama.Style.BRIGHT + "! " + txt + colorama.Style.RESET_ALL, file=sys.stderr)
         else:
             print("! " + txt, file=sys.stderr)
@@ -31,7 +33,7 @@ class LogConsole:
 
     def verbose(self, txt):
         if self.show_verbose:
-            if colorama:
+            if self.colorama:
                 print(colorama.Style.NORMAL + "  " + txt + colorama.Style.RESET_ALL)
             else:
                 print("  " + txt)
@@ -39,7 +41,7 @@ class LogConsole:
 
     def debug(self, txt):
         if self.show_debug:
-            if colorama:
+            if self.colorama:
                 print(colorama.Fore.GREEN + "# " + txt + colorama.Style.RESET_ALL)
             else:
                 print("# " + txt)
```
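The constructor now attempts the colorama import only when color output is requested and silently degrades to plain text otherwise, keeping colorama an optional dependency. The general optional-import pattern looks like this (hypothetical `Console` class, not the LogConsole API):

```python
class Console:
    def __init__(self, color):
        self.colorama = False
        if color:
            # optional dependency: degrade to plain output if it is missing
            try:
                import colorama
                self._colorama = colorama
                self.colorama = True
            except ImportError:
                pass

    def error(self, txt):
        if self.colorama:
            return self._colorama.Fore.RED + "! " + txt + self._colorama.Style.RESET_ALL
        return "! " + txt

c = Console(color=False)
plain = c.error("oops")
```

With `color=False` (or colorama absent) the fallback branch runs, so `plain` is simply `"! oops"`.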
```diff
@@ -7,8 +7,9 @@ class Thinner:
     """progressive thinner (universal, used for cleaning up snapshots)"""
 
     def __init__(self, schedule_str=""):
-        """schedule_str: comma seperated list of ThinnerRules. A plain number specifies how many snapshots to always
-        keep.
+        """
+        Args:
+            schedule_str: comma seperated list of ThinnerRules. A plain number specifies how many snapshots to always keep.
         """
 
         self.rules = []
@@ -19,7 +20,7 @@ class Thinner:
 
         rule_strs = schedule_str.split(",")
         for rule_str in rule_strs:
-            if rule_str.isdigit():
+            if rule_str.lstrip('-').isdigit():
                 self.always_keep = int(rule_str)
                 if self.always_keep < 0:
                     raise (Exception("Number of snapshots to keep cant be negative: {}".format(self.always_keep)))
@@ -37,11 +38,15 @@ class Thinner:
         return ret
 
     def thin(self, objects, keep_objects=None, now=None):
-        """thin list of objects with current schedule rules. objects: list of objects to thin. every object should
-        have timestamp attribute. keep_objects: objects to always keep (these should also be in normal objects list,
-        so we can use them to perhaps delete other obsolete objects)
+        """thin list of objects with current schedule rules. objects: list of
+        objects to thin. every object should have timestamp attribute.
 
         return( keeps, removes )
+
+        Args:
+            objects: list of objects to check (should have a timestamp attribute)
+            keep_objects: objects to always keep (if they also are in the in the normal objects list)
+            now: if specified, use this time as current time
         """
 
         if not keep_objects:
```
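The changed check matters because `'-5'.isdigit()` is `False`, so a negative keep-count used to fall through to the rule parser instead of reaching the dedicated error message. Stripping a leading minus first routes it to the explicit negative check, as this sketch of the logic shows (hypothetical `parse_keep` helper):

```python
def parse_keep(rule_str):
    # route plain (possibly negative) numbers to the keep-count check
    if rule_str.lstrip('-').isdigit():
        always_keep = int(rule_str)
        if always_keep < 0:
            raise Exception("Number of snapshots to keep cant be negative: {}".format(always_keep))
        return always_keep
    return None  # not a plain number: handled by the rule parser instead

assert parse_keep("10") == 10       # plain keep-count
assert parse_keep("1d1w") is None   # a schedule rule, not a number
```

`parse_keep("-5")` now raises the friendly error rather than being misparsed as a rule string.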
```diff
@@ -39,6 +39,9 @@ class ThinnerRule:
         rule_str = rule_str.lower()
         matches = re.findall("([0-9]*)([a-z]*)([0-9]*)([a-z]*)", rule_str)[0]
 
+        if '' in matches:
+            raise (Exception("Invalid schedule string: '{}'".format(rule_str)))
+
         period_amount = int(matches[0])
         period_unit = matches[1]
         ttl_amount = int(matches[2])
```
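The new guard rejects any schedule string where the regex leaves a capture group empty, i.e. anything that is not of the form `<amount><unit><amount><unit>`. A small sketch of the check (hypothetical `parse_rule` wrapper around the same regex):

```python
import re

def parse_rule(rule_str):
    # every group must be non-empty: period amount, period unit, ttl amount, ttl unit
    matches = re.findall("([0-9]*)([a-z]*)([0-9]*)([a-z]*)", rule_str.lower())[0]
    if '' in matches:
        raise Exception("Invalid schedule string: '{}'".format(rule_str))
    return matches

assert parse_rule("10s1min") == ('10', 's', '1', 'min')
```

A string like `"1d"` yields `('1', 'd', '', '')`, so the empty-group test catches the missing TTL part before `int('')` could blow up later.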
```diff
@@ -12,7 +12,7 @@ from zfs_autobackup.ThinnerRule import ThinnerRule
 class ZfsAutobackup:
     """main class"""
 
-    VERSION = "3.1-beta2"
+    VERSION = "3.1-beta3"
     HEADER = "zfs-autobackup v{} - Copyright 2020 E.H.Eefting (edwin@datux.nl)".format(VERSION)
 
     def __init__(self, argv, print_arguments=True):
@@ -90,7 +90,13 @@ class ZfsAutobackup:
                             help='Ignore transfer errors (still checks if received filesystem exists. useful for '
                                  'acltype errors)')
         parser.add_argument('--raw', action='store_true',
-                            help='For encrypted datasets, send data exactly as it exists on disk.')
+                            help=argparse.SUPPRESS)
+
+        parser.add_argument('--decrypt', action='store_true',
+                            help='Decrypt data before sending it over.')
+
+        parser.add_argument('--encrypt', action='store_true',
+                            help='Encrypt data after receiving it.')
 
         parser.add_argument('--test', action='store_true',
                             help='dont change anything, just show what would be done (still does all read-only '
@@ -104,12 +110,11 @@ class ZfsAutobackup:
                             help='show zfs progress output. Enabled automaticly on ttys. (use --no-progress to disable)')
         parser.add_argument('--no-progress', action='store_true', help=argparse.SUPPRESS)  # needed to workaround a zfs recv -v bug
 
-        # parser.add_argument('--output-pipe', metavar="COMMAND", default=[], action='append',
-        #                     help='add zfs send output pipe command')
-        #
-        # parser.add_argument('--input-pipe', metavar="COMMAND", default=[], action='append',
-        #                     help='add zfs recv input pipe command')
+        parser.add_argument('--send-pipe', metavar="COMMAND", default=[], action='append',
+                            help='pipe zfs send output through COMMAND')
 
+        parser.add_argument('--recv-pipe', metavar="COMMAND", default=[], action='append',
+                            help='pipe zfs recv input through COMMAND')
+
         # note args is the only global variable we use, since its a global readonly setting anyway
         args = parser.parse_args(argv)
@@ -132,11 +137,14 @@ class ZfsAutobackup:
         if args.destroy_incompatible:
             args.rollback = True
 
-        self.log = LogConsole(show_debug=self.args.debug, show_verbose=self.args.verbose)
+        self.log = LogConsole(show_debug=self.args.debug, show_verbose=self.args.verbose, color=sys.stdout.isatty())
 
         if args.resume:
             self.verbose("NOTE: The --resume option isn't needed anymore (its autodetected now)")
 
+        if args.raw:
+            self.verbose("NOTE: The --raw option isn't needed anymore (its autodetected now). Use --decrypt to explicitly send data decrypted.")
+
         if args.target_path is not None and args.target_path[0] == "/":
             self.log.error("Target should not start with a /")
             sys.exit(255)
@@ -223,7 +231,11 @@ class ZfsAutobackup:
 
     # NOTE: this method also uses self.args. args that need extra processing are passed as function parameters:
     def sync_datasets(self, source_node, source_datasets, target_node):
-        """Sync datasets, or thin-only on both sides"""
+        """Sync datasets, or thin-only on both sides
+        :type target_node: ZfsNode
+        :type source_datasets: list of ZfsDataset
+        :type source_node: ZfsNode
+        """
 
         fail_count = 0
         target_datasets = []
@@ -253,10 +265,10 @@ class ZfsAutobackup:
                                                 set_properties=self.set_properties_list(),
                                                 ignore_recv_exit_code=self.args.ignore_transfer_errors,
                                                 holds=not self.args.no_holds, rollback=self.args.rollback,
-                                                raw=self.args.raw, also_other_snapshots=self.args.other_snapshots,
+                                                also_other_snapshots=self.args.other_snapshots,
                                                 no_send=self.args.no_send,
                                                 destroy_incompatible=self.args.destroy_incompatible,
-                                                no_thinning=self.args.no_thinning)
+                                                output_pipes=self.args.send_pipe, input_pipes=self.args.recv_pipe, decrypt=self.args.decrypt, encrypt=self.args.encrypt)
             except Exception as e:
                 fail_count = fail_count + 1
                 source_dataset.error("FAILED: " + str(e))
@@ -264,7 +276,6 @@ class ZfsAutobackup:
                     raise
 
         target_path_dataset = ZfsDataset(target_node, self.args.target_path)
-        if not self.args.no_thinning:
         self.thin_missing_targets(target_dataset=target_path_dataset, used_target_datasets=target_datasets)
 
         if self.args.destroy_missing is not None:
@@ -274,6 +285,7 @@ class ZfsAutobackup:
 
     def thin_source(self, source_datasets):
 
+        if not self.args.no_thinning:
             self.set_title("Thinning source")
 
             for source_dataset in source_datasets:
@@ -291,7 +303,7 @@ class ZfsAutobackup:
             else:
                 dataset.verbose("Ignoring, already replicated")
 
-        return(ret)
+        return ret
 
     def filter_properties_list(self):
 
@@ -328,6 +340,9 @@ class ZfsAutobackup:
         self.set_title("Source settings")
 
         description = "[Source]"
+        if self.args.no_thinning:
+            source_thinner=None
+        else:
             source_thinner = Thinner(self.args.keep_source)
         source_node = ZfsNode(self.args.backup_name, self, ssh_config=self.args.ssh_config,
                               ssh_to=self.args.ssh_source, readonly=self.args.test,
@@ -359,6 +374,9 @@ class ZfsAutobackup:
 
         # create target_node
         self.set_title("Target settings")
+        if self.args.no_thinning:
+            target_thinner=None
+        else:
             target_thinner = Thinner(self.args.keep_target)
         target_node = ZfsNode(self.args.backup_name, self, ssh_config=self.args.ssh_config,
                               ssh_to=self.args.ssh_target,
@@ -367,10 +385,7 @@ class ZfsAutobackup:
                               thinner=target_thinner)
         target_node.verbose("Receive datasets under: {}".format(self.args.target_path))
 
-        if self.args.no_send:
-            self.set_title("Thinning source and target")
-        else:
-            self.set_title("Sending and thinning")
+        self.set_title("Synchronising")
 
         # check if exists, to prevent vague errors
         target_dataset = ZfsDataset(target_node, self.args.target_path)
@@ -379,19 +394,20 @@ class ZfsAutobackup:
                 "Target path '{}' does not exist. Please create this dataset first.".format(target_dataset)))
 
             # do the actual sync
+            # NOTE: even with no_send, no_thinning and no_snapshot it does a usefull thing because it checks if the common snapshots and shows incompatible snapshots
             fail_count = self.sync_datasets(
                 source_node=source_node,
                 source_datasets=source_datasets,
                 target_node=target_node)
 
+        #no target specified, run in snapshot-only mode
         else:
-            if not self.args.no_thinning:
             self.thin_source(source_datasets)
             fail_count = 0
 
         if not fail_count:
             if self.args.test:
-                self.set_title("All tests successfull.")
+                self.set_title("All tests successful.")
             else:
                 self.set_title("All operations completed successfully")
                 if not self.args.target_path:
@@ -399,7 +415,7 @@ class ZfsAutobackup:
 
         else:
             if fail_count != 255:
-                self.error("{} failures!".format(fail_count))
+                self.error("{} dataset(s) failed!".format(fail_count))
 
         if self.args.test:
             self.verbose("")
```
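`--raw` is kept as a hidden no-op for backward compatibility: `help=argparse.SUPPRESS` removes the flag from `--help` output while the parser still accepts it. A minimal demonstration of that argparse behavior:

```python
import argparse

parser = argparse.ArgumentParser(prog="demo")
# deprecated flag: still parses, but hidden from the help text
parser.add_argument('--raw', action='store_true', help=argparse.SUPPRESS)
parser.add_argument('--decrypt', action='store_true',
                    help='Decrypt data before sending it over.')

args = parser.parse_args(['--raw'])
assert args.raw is True                      # old scripts keep working
assert '--raw' not in parser.format_help()   # new users never see it
assert '--decrypt' in parser.format_help()
```

This lets a release retire an option gradually instead of breaking existing cron jobs that still pass it.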
```diff
@@ -6,10 +6,9 @@ from zfs_autobackup.CachedProperty import CachedProperty
 
 
 class ZfsDataset:
-    """a zfs dataset (filesystem/volume/snapshot/clone)
-    Note that a dataset doesn't have to actually exist (yet/anymore)
-    Also most properties are cached for performance-reasons, but also to allow --test to function correctly.
-
+    """a zfs dataset (filesystem/volume/snapshot/clone) Note that a dataset
+    doesn't have to actually exist (yet/anymore) Also most properties are cached
+    for performance-reasons, but also to allow --test to function correctly.
     """
 
     # illegal properties per dataset type. these will be removed from --set-properties and --filter-properties
@@ -19,8 +18,11 @@ class ZfsDataset:
     }
 
     def __init__(self, zfs_node, name, force_exists=None):
-        """name: full path of the zfs dataset exists: specify if you already know a dataset exists or not. for
-        performance and testing reasons. (otherwise it will have to check with zfs list when needed)
+        """
+        Args:
+            :type zfs_node: ZfsNode.ZfsNode
+            :type name: str
+            :type force_exists: bool
         """
         self.zfs_node = zfs_node
         self.name = name  # full name
@@ -41,12 +43,24 @@ class ZfsDataset:
         return self.name == obj.name
 
     def verbose(self, txt):
+        """
+        Args:
+            :type txt: str
+        """
         self.zfs_node.verbose("{}: {}".format(self.name, txt))
 
     def error(self, txt):
+        """
+        Args:
+            :type txt: str
+        """
         self.zfs_node.error("{}: {}".format(self.name, txt))
 
     def debug(self, txt):
+        """
+        Args:
+            :type txt: str
+        """
         self.zfs_node.debug("{}: {}".format(self.name, txt))
 
     def invalidate(self):
@@ -60,11 +74,19 @@ class ZfsDataset:
         return self.name.split("/")
 
     def lstrip_path(self, count):
-        """return name with first count components stripped"""
+        """return name with first count components stripped
+
+        Args:
+            :type count: int
+        """
         return "/".join(self.split_path()[count:])
 
     def rstrip_path(self, count):
-        """return name with last count components stripped"""
+        """return name with last count components stripped
+
+        Args:
+            :type count: int
+        """
         return "/".join(self.split_path()[:-count])
 
     @property
@@ -91,7 +113,15 @@ class ZfsDataset:
         return self.name.find("@") != -1
 
     def is_selected(self, value, source, inherited, ignore_received):
-        """determine if dataset should be selected for backup (called from ZfsNode)"""
+        """determine if dataset should be selected for backup (called from
+        ZfsNode)
+
+        Args:
+            :type value: str
+            :type source: str
+            :type inherited: bool
+            :type ignore_received: bool
+        """
 
         # sanity checks
         if source not in ["local", "received", "-"]:
@@ -121,8 +151,9 @@ class ZfsDataset:
 
     @CachedProperty
     def parent(self):
-        """get zfs-parent of this dataset. for snapshots this means it will get the filesystem/volume that it belongs
-        to. otherwise it will return the parent according to path
+        """get zfs-parent of this dataset. for snapshots this means it will get
+        the filesystem/volume that it belongs to. otherwise it will return the
+        parent according to path
 
         we cache this so everything in the parent that is cached also stays.
         """
@@ -131,24 +162,35 @@ class ZfsDataset:
         else:
             return ZfsDataset(self.zfs_node, self.rstrip_path(1))
 
-    def find_prev_snapshot(self, snapshot, also_other_snapshots=False):
-        """find previous snapshot in this dataset. None if it doesn't exist.
-
-        also_other_snapshots: set to true to also return snapshots that where not created by us. (is_ours)
-        """
-
-        if self.is_snapshot:
-            raise (Exception("Please call this on a dataset."))
-
-        index = self.find_snapshot_index(snapshot)
-        while index:
-            index = index - 1
-            if also_other_snapshots or self.snapshots[index].is_ours():
-                return self.snapshots[index]
-        return None
+    # NOTE: unused for now
+    # def find_prev_snapshot(self, snapshot, also_other_snapshots=False):
+    #     """find previous snapshot in this dataset. None if it doesn't exist.
+    #
+    #     also_other_snapshots: set to true to also return snapshots that where
+    #     not created by us. (is_ours)
+    #
+    #     Args:
+    #         :type snapshot: str or ZfsDataset.ZfsDataset
+    #         :type also_other_snapshots: bool
+    #     """
+    #
+    #     if self.is_snapshot:
+    #         raise (Exception("Please call this on a dataset."))
+    #
+    #     index = self.find_snapshot_index(snapshot)
+    #     while index:
+    #         index = index - 1
+    #         if also_other_snapshots or self.snapshots[index].is_ours():
+    #             return self.snapshots[index]
+    #     return None
 
     def find_next_snapshot(self, snapshot, also_other_snapshots=False):
-        """find next snapshot in this dataset. None if it doesn't exist"""
+        """find next snapshot in this dataset. None if it doesn't exist
+
+        Args:
+            :type snapshot: ZfsDataset
+            :type also_other_snapshots: bool
+        """
```
|
|
||||||
if self.is_snapshot:
|
if self.is_snapshot:
|
||||||
raise (Exception("Please call this on a dataset."))
|
raise (Exception("Please call this on a dataset."))
|
||||||
@@ -162,8 +204,9 @@ class ZfsDataset:
 
     @CachedProperty
     def exists(self):
-        """check if dataset exists.
-        Use force to force a specific value to be cached, if you already know. Useful for performance reasons"""
+        """check if dataset exists. Use force to force a specific value to be
+        cached, if you already know. Useful for performance reasons
+        """
 
         if self.force_exists is not None:
             self.debug("Checking if filesystem exists: was forced to {}".format(self.force_exists))
@@ -175,7 +218,11 @@ class ZfsDataset:
                                          hide_errors=True) and True)
 
     def create_filesystem(self, parents=False):
-        """create a filesystem"""
+        """create a filesystem
+
+        Args:
+            :type parents: bool
+        """
         if parents:
             self.verbose("Creating filesystem and parents")
             self.zfs_node.run(["zfs", "create", "-p", self.name])
@@ -186,7 +233,12 @@ class ZfsDataset:
         self.force_exists = True
 
     def destroy(self, fail_exception=False):
-        """destroy the dataset. by default failures are not an exception, so we can continue making backups"""
+        """destroy the dataset. by default failures are not an exception, so we
+        can continue making backups
+
+        Args:
+            :type fail_exception: bool
+        """
 
         self.verbose("Destroying")
 
@@ -225,7 +277,11 @@ class ZfsDataset:
         return ret
 
     def is_changed(self, min_changed_bytes=1):
-        """dataset is changed since ANY latest snapshot ?"""
+        """dataset is changed since ANY latest snapshot ?
+
+        Args:
+            :type min_changed_bytes: int
+        """
         self.debug("Checking if dataset is changed")
 
         if min_changed_bytes == 0:
@@ -272,7 +328,9 @@ class ZfsDataset:
 
     @property
     def timestamp(self):
-        """get timestamp from snapshot name. Only works for our own snapshots with the correct format."""
+        """get timestamp from snapshot name. Only works for our own snapshots
+        with the correct format.
+        """
         time_str = re.findall("^.*-([0-9]*)$", self.snapshot_name)[0]
         if len(time_str) != 14:
             raise (Exception("Snapshot has invalid timestamp in name: {}".format(self.snapshot_name)))
@@ -282,7 +340,11 @@ class ZfsDataset:
         return time_secs
 
     def from_names(self, names):
-        """convert a list of names to a list ZfsDatasets for this zfs_node"""
+        """convert a list of names to a list ZfsDatasets for this zfs_node
+
+        Args:
+            :type names: list of str
+        """
         ret = []
         for name in names:
             ret.append(ZfsDataset(self.zfs_node, name))
@@ -328,7 +390,13 @@ class ZfsDataset:
         return ret
 
     def find_snapshot(self, snapshot):
-        """find snapshot by snapshot (can be a snapshot_name or a different ZfsDataset )"""
+        """find snapshot by snapshot (can be a snapshot_name or a different
+        ZfsDataset )
+
+        Args:
+            :rtype: ZfsDataset
+            :type snapshot: str or ZfsDataset
+        """
 
         if not isinstance(snapshot, ZfsDataset):
             snapshot_name = snapshot
@@ -342,7 +410,12 @@ class ZfsDataset:
             return None
 
     def find_snapshot_index(self, snapshot):
-        """find snapshot index by snapshot (can be a snapshot_name or ZfsDataset)"""
+        """find snapshot index by snapshot (can be a snapshot_name or
+        ZfsDataset)
+
+        Args:
+            :type snapshot: str or ZfsDataset
+        """
 
         if not isinstance(snapshot, ZfsDataset):
             snapshot_name = snapshot
@@ -371,7 +444,11 @@ class ZfsDataset:
         return int(output[0])
 
     def is_changed_ours(self, min_changed_bytes=1):
-        """dataset is changed since OUR latest snapshot?"""
+        """dataset is changed since OUR latest snapshot?
+
+        Args:
+            :type min_changed_bytes: int
+        """
 
         if min_changed_bytes == 0:
             return True
@@ -387,7 +464,11 @@ class ZfsDataset:
 
     @CachedProperty
     def recursive_datasets(self, types="filesystem,volume"):
-        """get all (non-snapshot) datasets recursively under us"""
+        """get all (non-snapshot) datasets recursively under us
+
+        Args:
+            :type types: str
+        """
 
         self.debug("Getting all recursive datasets under us")
 
@@ -399,7 +480,11 @@ class ZfsDataset:
 
     @CachedProperty
     def datasets(self, types="filesystem,volume"):
-        """get all (non-snapshot) datasets directly under us"""
+        """get all (non-snapshot) datasets directly under us
+
+        Args:
+            :type types: str
+        """
 
         self.debug("Getting all datasets under us")
 
@@ -409,11 +494,19 @@ class ZfsDataset:
 
         return self.from_names(names[1:])
 
-    def send_pipe(self, features, prev_snapshot=None, resume_token=None, show_progress=False, raw=False):
+    def send_pipe(self, features, prev_snapshot, resume_token, show_progress, raw, send_properties, write_embedded, output_pipes):
         """returns a pipe with zfs send output for this snapshot
 
-        resume_token: resume sending from this token. (in that case we don't need to know snapshot names)
+        resume_token: resume sending from this token. (in that case we don't
+        need to know snapshot names)
+
+        Args:
+            :type output_pipes: list of str
+            :type features: list of str
+            :type prev_snapshot: ZfsDataset
+            :type resume_token: str
+            :type show_progress: bool
+            :type raw: bool
         """
         # build source command
         cmd = []
@@ -422,28 +515,22 @@ class ZfsDataset:
 
         # all kind of performance options:
         if 'large_blocks' in features and "-L" in self.zfs_node.supported_send_options:
-            cmd.append("-L")  # large block support (only if recordsize>128k which is seldomly used)
+            cmd.append("--large-block")  # large block support (only if recordsize>128k which is seldomly used)
 
-        if 'embedded_data' in features and "-e" in self.zfs_node.supported_send_options:
-            cmd.append("-e")  # WRITE_EMBEDDED, more compact stream
+        if write_embedded and 'embedded_data' in features and "-e" in self.zfs_node.supported_send_options:
+            cmd.append("--embed")  # WRITE_EMBEDDED, more compact stream
 
         if "-c" in self.zfs_node.supported_send_options:
-            cmd.append("-c")  # use compressed WRITE records
+            cmd.append("--compressed")  # use compressed WRITE records
 
-        # NOTE: performance is usually worse with this option, according to manual
-        # also -D will be depricated in newer ZFS versions
-        # if not resume:
-        #     if "-D" in self.zfs_node.supported_send_options:
-        #         cmd.append("-D") # dedupped stream, sends less duplicate data
-
-        # raw? (for encryption)
+        # raw? (send over encrypted data in its original encrypted form without decrypting)
         if raw:
             cmd.append("--raw")
 
         # progress output
         if show_progress:
-            cmd.append("-v")
-            cmd.append("-P")
+            cmd.append("--verbose")
+            cmd.append("--parsable")
 
         # resume a previous send? (don't need more parameters in that case)
         if resume_token:
@@ -451,7 +538,8 @@ class ZfsDataset:
 
         else:
             # send properties
-            cmd.append("-p")
+            if send_properties:
+                cmd.append("--props")
 
             # incremental?
             if prev_snapshot:
@@ -459,14 +547,36 @@ class ZfsDataset:
 
             cmd.append(self.name)
 
-        # NOTE: this doesn't start the send yet, it only returns a subprocess.Pipe
-        return self.zfs_node.run(cmd, pipe=True)
+        # #add custom output pipes?
+        # if output_pipes:
+        #     #local so do our own piping
+        #     if self.zfs_node.is_local():
+        #         output_pipe = self.zfs_node.run(cmd)
+        #         for pipe_cmd in output_pipes:
+        #             output_pipe=self.zfs_node.run(pipe_cmd, inp=output_pipe, )
+        #         return output_pipe
+        #     #remote, so add with actual | and let remote shell handle it
+        #     else:
+        #         for pipe_cmd in output_pipes:
+        #             cmd.append("|")
+        #             cmd.extend(pipe_cmd)
+
+        return self.zfs_node.run(cmd, pipe=True, readonly=True)
+
 
     def recv_pipe(self, pipe, features, filter_properties=None, set_properties=None, ignore_exit_code=False):
         """starts a zfs recv for this snapshot and uses pipe as input
 
-        note: you can it both on a snapshot or filesystem object.
-        The resulting zfs command is the same, only our object cache is invalidated differently.
+        note: you can it both on a snapshot or filesystem object. The
+        resulting zfs command is the same, only our object cache is invalidated
+        differently.
+
+        Args:
+            :type pipe: subprocess.pOpen
+            :type features: list of str
+            :type filter_properties: list of str
+            :type set_properties: list of str
+            :type ignore_exit_code: bool
         """
 
         if set_properties is None:
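
The `send_pipe()` hunk above switches to long-form `zfs send` flags and gates them on pool features plus the new `send_properties`/`write_embedded` parameters. As a standalone illustration (not code from this repository), the flag-assembly logic can be sketched roughly like this; the function name and defaults here are hypothetical:

```python
def build_send_cmd(features, supported_opts, send_properties=True, raw=False,
                   show_progress=False, write_embedded=True, resume_token=None,
                   prev_snapshot=None, name="pool/fs@snap"):
    """Simplified sketch of the flag assembly in ZfsDataset.send_pipe()."""
    cmd = ["zfs", "send"]
    # performance options, only when both the pool feature and the local
    # zfs binary support them:
    if 'large_blocks' in features and "-L" in supported_opts:
        cmd.append("--large-block")
    if write_embedded and 'embedded_data' in features and "-e" in supported_opts:
        cmd.append("--embed")
    if "-c" in supported_opts:
        cmd.append("--compressed")
    if raw:
        cmd.append("--raw")  # keep encrypted data encrypted on the wire
    if show_progress:
        cmd.extend(["--verbose", "--parsable"])
    if resume_token:
        # resuming a previous send needs no snapshot names
        cmd.extend(["-t", resume_token])
    else:
        if send_properties:
            cmd.append("--props")
        if prev_snapshot:
            cmd.extend(["-i", prev_snapshot])
        cmd.append(name)
    return cmd
```

For example, an encrypted raw send on a pool with embedded-data support yields `zfs send --embed --compressed --raw --props pool/fs@snap`.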
@@ -520,12 +630,26 @@ class ZfsDataset:
             self.error("error during transfer")
             raise (Exception("Target doesn't exist after transfer, something went wrong."))
 
-    def transfer_snapshot(self, target_snapshot, features, prev_snapshot=None, show_progress=False,
-                          filter_properties=None, set_properties=None, ignore_recv_exit_code=False, resume_token=None,
-                          raw=False):
-        """transfer this snapshot to target_snapshot. specify prev_snapshot for incremental transfer
+    def transfer_snapshot(self, target_snapshot, features, prev_snapshot, show_progress,
+                          filter_properties, set_properties, ignore_recv_exit_code, resume_token,
+                          raw, send_properties, write_embedded, output_pipes, input_pipes):
+        """transfer this snapshot to target_snapshot. specify prev_snapshot for
+        incremental transfer
 
         connects a send_pipe() to recv_pipe()
+
+        Args:
+            :type output_pipes: list of str
+            :type input_pipes: list of str
+            :type target_snapshot: ZfsDataset
+            :type features: list of str
+            :type prev_snapshot: ZfsDataset
+            :type show_progress: bool
+            :type filter_properties: list of str
+            :type set_properties: list of str
+            :type ignore_recv_exit_code: bool
+            :type resume_token: str
+            :type raw: bool
         """
 
         if set_properties is None:
@@ -547,7 +671,7 @@ class ZfsDataset:
 
         # do it
         pipe = self.send_pipe(features=features, show_progress=show_progress, prev_snapshot=prev_snapshot,
-                              resume_token=resume_token, raw=raw)
+                              resume_token=resume_token, raw=raw, send_properties=send_properties, write_embedded=write_embedded, output_pipes=output_pipes)
         target_snapshot.recv_pipe(pipe, features=features, filter_properties=filter_properties,
                                   set_properties=set_properties, ignore_exit_code=ignore_recv_exit_code)
 
@@ -565,7 +689,12 @@ class ZfsDataset:
             return
 
     def get_resume_snapshot(self, resume_token):
-        """returns snapshot that will be resumed by this resume token (run this on source with target-token)"""
+        """returns snapshot that will be resumed by this resume token (run this
+        on source with target-token)
+
+        Args:
+            :type resume_token: str
+        """
         # use zfs send -n option to determine this
         # NOTE: on smartos stderr, on linux stdout
         (stdout, stderr) = self.zfs_node.run(["zfs", "send", "-t", resume_token, "-n", "-v"], valid_exitcodes=[0, 255],
@@ -585,11 +714,16 @@ class ZfsDataset:
             return None
 
     def thin_list(self, keeps=None, ignores=None):
-        """determines list of snapshots that should be kept or deleted based on the thinning schedule. cull the herd!
-        keep: list of snapshots to always keep (usually the last) ignores: snapshots to completely ignore (usually
-        incompatible target snapshots that are going to be destroyed anyway)
+        """determines list of snapshots that should be kept or deleted based on
+        the thinning schedule. cull the herd!
 
         returns: ( keeps, obsoletes )
+
+        Args:
+            :param keeps: list of snapshots to always keep (usually the last)
+            :param ignores: snapshots to completely ignore (usually incompatible target snapshots that are going to be destroyed anyway)
+            :type keeps: list of ZfsDataset
+            :type ignores: list of ZfsDataset
         """
 
         if ignores is None:
@@ -599,10 +733,14 @@ class ZfsDataset:
 
         snapshots = [snapshot for snapshot in self.our_snapshots if snapshot not in ignores]
 
-        return self.zfs_node.thinner.thin(snapshots, keep_objects=keeps)
+        return self.zfs_node.thin(snapshots, keep_objects=keeps)
 
     def thin(self, skip_holds=False):
-        """destroys snapshots according to thin_list, except last snapshot"""
+        """destroys snapshots according to thin_list, except last snapshot
+
+        Args:
+            :type skip_holds: bool
+        """
 
         (keeps, obsoletes) = self.thin_list(keeps=self.our_snapshots[-1:])
         for obsolete in obsoletes:
@@ -613,8 +751,11 @@ class ZfsDataset:
                 self.snapshots.remove(obsolete)
 
     def find_common_snapshot(self, target_dataset):
-        """find latest common snapshot between us and target
-        returns None if its an initial transfer
+        """find latest common snapshot between us and target returns None if its
+        an initial transfer
+
+        Args:
+            :type target_dataset: ZfsDataset
         """
         if not target_dataset.snapshots:
             # target has nothing yet
@@ -632,8 +773,12 @@ class ZfsDataset:
             raise (Exception("You probably need to delete the target dataset to fix this."))
 
     def find_start_snapshot(self, common_snapshot, also_other_snapshots):
-        """finds first snapshot to send
-        :rtype: ZfsDataset or None if we cant find it.
+        """finds first snapshot to send :rtype: ZfsDataset or None if we cant
+        find it.
+
+        Args:
+            :type common_snapshot: ZfsDataset
+            :type also_other_snapshots: bool
         """
 
         if not common_snapshot:
@@ -653,8 +798,13 @@ class ZfsDataset:
             return start_snapshot
 
     def find_incompatible_snapshots(self, common_snapshot):
-        """returns a list of snapshots that is incompatible for a zfs recv onto the common_snapshot.
-        all direct followup snapshots with written=0 are compatible."""
+        """returns a list of snapshots that is incompatible for a zfs recv onto
+        the common_snapshot. all direct followup snapshots with written=0 are
+        compatible.
+
+        Args:
+            :type common_snapshot: ZfsDataset
+        """
 
         ret = []
 
@@ -668,7 +818,12 @@ class ZfsDataset:
         return ret
 
     def get_allowed_properties(self, filter_properties, set_properties):
-        """only returns lists of allowed properties for this dataset type"""
+        """only returns lists of allowed properties for this dataset type
+
+        Args:
+            :type filter_properties: list of str
+            :type set_properties: list of str
+        """
 
         allowed_filter_properties = []
         allowed_set_properties = []
@@ -685,7 +840,14 @@ class ZfsDataset:
         return allowed_filter_properties, allowed_set_properties
 
     def _add_virtual_snapshots(self, source_dataset, source_start_snapshot, also_other_snapshots):
-        """add snapshots from source to our snapshot list. (just the in memory list, no disk operations)"""
+        """add snapshots from source to our snapshot list. (just the in memory
+        list, no disk operations)
+
+        Args:
+            :type source_dataset: ZfsDataset
+            :type source_start_snapshot: ZfsDataset
+            :type also_other_snapshots: bool
+        """
 
         self.debug("Creating virtual target snapshots")
         snapshot = source_start_snapshot
@@ -699,7 +861,15 @@ class ZfsDataset:
             snapshot = source_dataset.find_next_snapshot(snapshot, also_other_snapshots)
 
     def _pre_clean(self, common_snapshot, target_dataset, source_obsoletes, target_obsoletes, target_keeps):
-        """cleanup old stuff before starting snapshot syncing"""
+        """cleanup old stuff before starting snapshot syncing
+
+        Args:
+            :type common_snapshot: ZfsDataset
+            :type target_dataset: ZfsDataset
+            :type source_obsoletes: list of ZfsDataset
+            :type target_obsoletes: list of ZfsDataset
+            :type target_keeps: list of ZfsDataset
+        """
 
         # on source: destroy all obsoletes before common.
         # But after common, only delete snapshots that target also doesn't want
@@ -721,7 +891,12 @@ class ZfsDataset:
                 target_snapshot.destroy()
 
     def _validate_resume_token(self, target_dataset, start_snapshot):
-        """validate and get (or destory) resume token"""
+        """validate and get (or destory) resume token
+
+        Args:
+            :type target_dataset: ZfsDataset
+            :type start_snapshot: ZfsDataset
+        """
 
         if 'receive_resume_token' in target_dataset.properties:
            resume_token = target_dataset.properties['receive_resume_token']
@@ -734,7 +909,13 @@ class ZfsDataset:
            return resume_token
 
     def _plan_sync(self, target_dataset, also_other_snapshots):
-        """plan where to start syncing and what to sync and what to keep"""
+        """plan where to start syncing and what to sync and what to keep
+
+        Args:
+            :rtype: ( ZfsDataset, ZfsDataset, list of ZfsDataset, list of ZfsDataset, list of ZfsDataset, list of ZfsDataset )
+            :type target_dataset: ZfsDataset
+            :type also_other_snapshots: bool
+        """
 
         # determine common and start snapshot
         target_dataset.debug("Determining start snapshot")
@@ -758,7 +939,13 @@ class ZfsDataset:
         return common_snapshot, start_snapshot, source_obsoletes, target_obsoletes, target_keeps, incompatible_target_snapshots
 
     def handle_incompatible_snapshots(self, incompatible_target_snapshots, destroy_incompatible):
-        """destroy incompatbile snapshots on target before sync, or inform user what to do"""
+        """destroy incompatbile snapshots on target before sync, or inform user
+        what to do
+
+        Args:
+            :type incompatible_target_snapshots: list of ZfsDataset
+            :type destroy_incompatible: bool
+        """
 
         if incompatible_target_snapshots:
             if not destroy_incompatible:
@ -771,10 +958,30 @@ class ZfsDataset:
|
|||||||
snapshot.destroy()
|
snapshot.destroy()
|
||||||
self.snapshots.remove(snapshot)
|
self.snapshots.remove(snapshot)
|
||||||
|
|
||||||
|
|
||||||
def sync_snapshots(self, target_dataset, features, show_progress, filter_properties, set_properties,
|
def sync_snapshots(self, target_dataset, features, show_progress, filter_properties, set_properties,
|
||||||
ignore_recv_exit_code, holds, rollback, raw, also_other_snapshots,
|
ignore_recv_exit_code, holds, rollback, decrypt, encrypt, also_other_snapshots,
|
||||||
no_send, destroy_incompatible, no_thinning):
|
no_send, destroy_incompatible, output_pipes, input_pipes):
|
||||||
"""sync this dataset's snapshots to target_dataset, while also thinning out old snapshots along the way."""
|
"""sync this dataset's snapshots to target_dataset, while also thinning
|
||||||
|
out old snapshots along the way.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
:type output_pipes: list of str
|
||||||
|
:type input_pipes: list of str
|
||||||
|
:type target_dataset: ZfsDataset
|
||||||
|
:type features: list of str
|
||||||
|
:type show_progress: bool
|
||||||
|
:type filter_properties: list of str
|
||||||
|
:type set_properties: list of str
|
||||||
|
:type ignore_recv_exit_code: bool
|
||||||
|
:type holds: bool
|
||||||
|
:type rollback: bool
|
||||||
|
:type raw: bool
|
||||||
|
:type decrypt: bool
|
||||||
|
:type also_other_snapshots: bool
|
||||||
|
:type no_send: bool
|
||||||
|
:type destroy_incompatible: bool
|
||||||
|
"""
|
||||||
|
|
||||||
(common_snapshot, start_snapshot, source_obsoletes, target_obsoletes, target_keeps,
|
(common_snapshot, start_snapshot, source_obsoletes, target_obsoletes, target_keeps,
|
||||||
incompatible_target_snapshots) = \
|
incompatible_target_snapshots) = \
|
||||||
@ -782,11 +989,13 @@ class ZfsDataset:
|
|||||||
|
|
||||||
# NOTE: we do this because we dont want filesystems to fillup when backups keep failing.
|
# NOTE: we do this because we dont want filesystems to fillup when backups keep failing.
|
||||||
# Also usefull with no_send to still cleanup stuff.
|
# Also usefull with no_send to still cleanup stuff.
|
||||||
if not no_thinning:
|
|
||||||
self._pre_clean(
|
self._pre_clean(
|
||||||
common_snapshot=common_snapshot, target_dataset=target_dataset,
|
common_snapshot=common_snapshot, target_dataset=target_dataset,
|
||||||
target_keeps=target_keeps, target_obsoletes=target_obsoletes, source_obsoletes=source_obsoletes)
|
target_keeps=target_keeps, target_obsoletes=target_obsoletes, source_obsoletes=source_obsoletes)
|
||||||
|
|
||||||
|
# handle incompatible stuff on target
|
||||||
|
target_dataset.handle_incompatible_snapshots(incompatible_target_snapshots, destroy_incompatible)
|
```diff
         # now actually transfer the snapshots, if we want
         if no_send:
             return
@@ -794,13 +1003,34 @@ class ZfsDataset:
         # check if we can resume
         resume_token = self._validate_resume_token(target_dataset, start_snapshot)

-        # handle incompatible stuff on target
-        target_dataset.handle_incompatible_snapshots(incompatible_target_snapshots, destroy_incompatible)

         # rollback target to latest?
         if rollback:
             target_dataset.rollback()

+        #defaults for these settings if there is no encryption stuff going on:
+        send_properties = True
+        raw = False
+        write_embedded = True
+
+        (active_filter_properties, active_set_properties) = self.get_allowed_properties(filter_properties, set_properties)
+
+        # source dataset encrypted?
+        if self.properties.get('encryption', 'off')!='off':
+            # user wants to send it over decrypted?
+            if decrypt:
+                # when decrypting, zfs cant send properties
+                send_properties=False
+            else:
+                # keep data encrypted by sending it raw (including properties)
+                raw=True
+
+        # encrypt at target?
+        if encrypt and not raw:
+            # filter out encryption properties to let encryption on the target take place
+            active_filter_properties.extend(["keylocation","pbkdf2iters","keyformat", "encryption"])
+            write_embedded=False
+
         # now actually transfer the snapshots
         prev_source_snapshot = common_snapshot
         source_snapshot = start_snapshot
```
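The encryption handling added in this hunk boils down to choosing three send flags from the source's encryption state and the user's `decrypt`/`encrypt` wishes. A minimal standalone sketch of that decision logic (the function name and tuple return are illustrative, not part of the project's API):

```python
def send_mode(source_encrypted, decrypt, encrypt):
    """Return (send_properties, raw, write_embedded) for a zfs send.

    Illustrative sketch of the flag logic in this commit, not the real method.
    """
    send_properties = True   # default: replicate dataset properties
    raw = False              # default: a normal, non-raw stream
    write_embedded = True    # default: allow embedded data blocks

    if source_encrypted:
        if decrypt:
            # when decrypting, zfs can't send properties
            send_properties = False
        else:
            # keep data encrypted by sending it raw (including properties)
            raw = True

    if encrypt and not raw:
        # re-encryption happens on the target; embedded blocks would bypass it
        write_embedded = False

    return send_properties, raw, write_embedded
```

Note that a raw stream of an already-encrypted source wins over target-side encryption: `encrypt` only disables embedded blocks when the stream is not raw.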
```diff
@@ -809,15 +1039,14 @@ class ZfsDataset:

                 # does target actually want it?
                 if target_snapshot not in target_obsoletes:
-                    # NOTE: should we let transfer_snapshot handle this?
-                    (allowed_filter_properties, allowed_set_properties) = self.get_allowed_properties(filter_properties,
-                                                                                                      set_properties)
                     source_snapshot.transfer_snapshot(target_snapshot, features=features,
                                                       prev_snapshot=prev_source_snapshot, show_progress=show_progress,
-                                                      filter_properties=allowed_filter_properties,
-                                                      set_properties=allowed_set_properties,
+                                                      filter_properties=active_filter_properties,
+                                                      set_properties=active_set_properties,
                                                       ignore_recv_exit_code=ignore_recv_exit_code,
-                                                      resume_token=resume_token, raw=raw)
+                                                      resume_token=resume_token, write_embedded=write_embedded, raw=raw, send_properties=send_properties, output_pipes=output_pipes, input_pipes=input_pipes)

             resume_token = None

             # hold the new common snapshots and release the previous ones
```
```diff
@@ -830,7 +1059,6 @@ class ZfsDataset:
                 prev_source_snapshot.release()
                 target_dataset.find_snapshot(prev_source_snapshot).release()

-            if not no_thinning:
             # we may now destroy the previous source snapshot if its obsolete
             if prev_source_snapshot in source_obsoletes:
                 prev_source_snapshot.destroy()
```
```diff
@@ -16,11 +16,8 @@ class ZfsNode(ExecuteNode):
    """a node that contains zfs datasets. implements global (systemwide/pool wide) zfs commands"""

    def __init__(self, backup_name, logger, ssh_config=None, ssh_to=None, readonly=False, description="",
-                debug_output=False, thinner=Thinner()):
+                debug_output=False, thinner=None):
        self.backup_name = backup_name
-       if not description and ssh_to:
-           self.description = ssh_to
-       else:
        self.description = description

        self.logger = logger
```
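Besides making "no thinner" expressible, the change from `thinner=Thinner()` to `thinner=None` also sidesteps a classic Python pitfall: a default argument is evaluated once, at function definition time, so every `ZfsNode` would otherwise share one `Thinner` instance. A hedged sketch with toy classes (not the project's actual classes):

```python
class Thinner:
    """Toy stand-in for the real Thinner, just to show the pitfall."""
    def __init__(self):
        self.rules = []

def bad(thinner=Thinner()):
    # the default Thinner() was built once, when `def` executed,
    # so every call without an argument gets the SAME object
    return thinner

def good(thinner=None):
    # None as sentinel: build (or skip) a Thinner per call
    return thinner if thinner is not None else Thinner()
```

With the sentinel form, callers that pass nothing get independent instances, and `None` can also mean "thinning disabled", which the rest of this commit relies on.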
```diff
@@ -33,6 +30,7 @@ class ZfsNode(ExecuteNode):
        else:
            self.verbose("Datasets are local")

+       if thinner is not None:
            rules = thinner.human_rules()
            if rules:
                for rule in rules:
```
```diff
@@ -40,7 +38,7 @@ class ZfsNode(ExecuteNode):
            else:
                self.verbose("Keep no old snaphots")

-       self.thinner = thinner
+       self.__thinner = thinner

        # list of ZfsPools
        self.__pools = {}
```
```diff
@@ -50,6 +48,12 @@ class ZfsNode(ExecuteNode):

        ExecuteNode.__init__(self, ssh_config=ssh_config, ssh_to=ssh_to, readonly=readonly, debug_output=debug_output)

+   def thin(self, objects, keep_objects):
+       if self.__thinner is not None:
+           return self.__thinner.thin(objects, keep_objects)
+       else:
+           return ( keep_objects, [] )
+
    @CachedProperty
    def supported_send_options(self):
        """list of supported options, for optimizing sends"""
```
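The new `thin()` wrapper is a small null-object-style delegation: when no thinner is configured, it keeps everything and marks nothing obsolete. A self-contained sketch of the pattern with toy classes (illustrative names, not the real implementation):

```python
class Thinner:
    """Toy thinner: keeps only explicitly kept objects, the rest is obsolete."""
    def thin(self, objects, keep_objects):
        keep = [o for o in objects if o in keep_objects]
        obsolete = [o for o in objects if o not in keep_objects]
        return keep, obsolete

class Node:
    def __init__(self, thinner=None):
        self.__thinner = thinner

    def thin(self, objects, keep_objects):
        if self.__thinner is not None:
            # delegate to the configured thinning policy
            return self.__thinner.thin(objects, keep_objects)
        # no thinner configured: nothing becomes obsolete
        return (keep_objects, [])
```

Callers can now always call `node.thin(...)` without checking whether thinning is enabled; the `None` case simply returns an empty obsolete list.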
```diff
@@ -135,8 +139,8 @@ class ZfsNode(ExecuteNode):
        else:
            self.error(prefix + line.rstrip())

-   def _parse_stderr_pipe(self, line, hide_errors):
-       self.parse_zfs_progress(line, hide_errors, "STDERR|> ")
+   # def _parse_stderr_pipe(self, line, hide_errors):
+   #     self.parse_zfs_progress(line, hide_errors, "STDERR|> ")

    def _parse_stderr(self, line, hide_errors):
        self.parse_zfs_progress(line, hide_errors, "STDERR > ")
```
```diff
@@ -45,7 +45,6 @@ class ZfsPool():
        ret = {}

        for pair in self.zfs_node.run(tab_split=True, cmd=cmd, readonly=True, valid_exitcodes=[0]):
-           if len(pair) == 4:
            ret[pair[1]] = pair[2]

        return ret
```