Compare commits
74 Commits
v3.0.1-beta ... v3.1-beta6
| SHA1 |
|---|
| ea9012e476 |
| 97e3c110b3 |
| 9264e8de6d |
| 830ccf1bd4 |
| a389e4c81c |
| 36a66fbafc |
| b70c9986c7 |
| 664ea32c96 |
| 30f30babea |
| 5e04aabf37 |
| 59d53e9664 |
| 171f0ac5ad |
| 0ce3bf1297 |
| c682665888 |
| 086cfe570b |
| 521d1078bd |
| 8ea178af1f |
| 3e39e1553e |
| f0cc2bca2a |
| 59b0c23a20 |
| 401a3f73cc |
| 8ec5ed2f4f |
| 8318b2f9bf |
| 72b97ab2e8 |
| a16a038f0e |
| fc0da9d380 |
| 31be12c0bf |
| 176f04b302 |
| 7696d8c16d |
| 190a73ec10 |
| 2bf015e127 |
| 671eda7386 |
| 3d4b26cec3 |
| c0ea311e18 |
| b7b2723b2e |
| ec1d3ff93e |
| 352d5e6094 |
| 488ff6f551 |
| f52b8bbf58 |
| e47d461999 |
| a920744b1e |
| 63f423a201 |
| db6523f3c0 |
| 6b172dce2d |
| 85d493469d |
| bef3be4955 |
| f9719ba87e |
| 4b97f789df |
| ed7cd41ad7 |
| 62e19d97c2 |
| 594a2664c4 |
| d8fbc96be6 |
| 61bb590112 |
| 86ea5e49f4 |
| 01642365c7 |
| 4910b1dfb5 |
| 966df73d2f |
| 69ed827c0d |
| e79f6ac157 |
| 59efd070a1 |
| 80c1bdad1c |
| cf72de7c28 |
| 686bb48bda |
| 6a48b8a2a9 |
| 477b66c342 |
| a4155f970e |
| 0c9d14bf32 |
| 1f5955ccec |
| 1b94a849db |
| 98c40c6df5 |
| b479ab9c98 |
| a0fb205e75 |
| d3ce222921 |
| 36e134eb75 |
6  .github/workflows/regression.yml (vendored)

```diff
@@ -17,7 +17,7 @@ jobs:
 
       - name: Prepare
-        run: sudo apt update && sudo apt install zfsutils-linux && sudo -H pip3 install coverage unittest2 mock==3.0.5 coveralls
+        run: sudo apt update && sudo apt install zfsutils-linux lzop pigz zstd gzip xz-utils lz4 && sudo -H pip3 install coverage unittest2 mock==3.0.5 coveralls
 
       - name: Regression test
@@ -39,7 +39,7 @@ jobs:
 
       - name: Prepare
-        run: sudo apt update && sudo apt install zfsutils-linux python3-setuptools && sudo -H pip3 install coverage unittest2 mock==3.0.5 coveralls
+        run: sudo apt update && sudo apt install zfsutils-linux python3-setuptools lzop pigz zstd gzip xz-utils liblz4-tool && sudo -H pip3 install coverage unittest2 mock==3.0.5 coveralls
 
       - name: Regression test
@@ -64,7 +64,7 @@ jobs:
           python-version: '2.x'
 
       - name: Prepare
-        run: sudo apt update && sudo apt install zfsutils-linux python-setuptools && sudo -H pip install coverage unittest2 mock==3.0.5 coveralls
+        run: sudo apt update && sudo apt install zfsutils-linux python-setuptools lzop pigz zstd gzip xz-utils liblz4-tool && sudo -H pip install coverage unittest2 mock==3.0.5 coveralls colorama
 
       - name: Regression test
         run: sudo -E ./tests/run_tests
```
1  .gitignore (vendored)

```diff
@@ -11,3 +11,4 @@ __pycache__
 python2.env
 venv
 .idea
+password.sh
```
175  README.md

```diff
@@ -17,7 +17,7 @@ Since it's using ZFS commands, you can see what it's actually doing by specifying
 
 An important feature that's missing from other tools is a reliable `--test` option: this allows you to see what zfs-autobackup will do and tune your parameters. It will do everything, except make changes to your system.
 
-zfs-autobackup tries to be the easiest to use backup tool for zfs.
+zfs-autobackup tries to be the easiest to use backup tool for zfs, with the most features.
 
 ## Features
```
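The reliable `--test` option described in the hunk above makes a safe dry-run trivial. A minimal sketch — the backup name `offsite1` and target path `backuppool/offsite1` are invented for illustration, not taken from this diff:

```console
# show what would be snapshotted, sent and thinned, without changing anything:
[root@server ~]# zfs-autobackup --test --verbose offsite1 backuppool/offsite1
```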
```diff
@@ -33,6 +33,7 @@ zfs-autobackup tries to be the easiest to use backup tool for zfs.
 * Or even pull data from a server while pushing the backup to another server. (Zero trust between source and target server)
 * Can be scheduled via a simple cronjob or run directly from the commandline.
 * Supports resuming of interrupted transfers.
+* ZFS encryption support: can decrypt / encrypt or even re-encrypt datasets during transfer.
 * Multiple backups from and to the same datasets are no problem.
 * Creates the snapshot before doing anything else. (assuring you at least have a snapshot if all else fails)
 * Checks everything but tries to continue on non-fatal errors when possible. (Reports error-count when done)
```
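The feature list above mentions cron scheduling. A hedged sketch of what that can look like — the paths, schedule and backup name are assumptions for illustration only:

```console
# /etc/cron.d/zfs-autobackup: hypothetical nightly run at 03:00
0 3 * * * root /usr/local/bin/zfs-autobackup --verbose offsite1 backuppool/offsite1 >>/var/log/zfs-autobackup.log 2>&1
```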
```diff
@@ -42,14 +43,17 @@ zfs-autobackup tries to be the easiest to use backup tool for zfs.
 * Uses zfs-holds on important snapshots so they can't be accidentally destroyed.
 * Automatic resuming of failed transfers.
 * Can continue from existing common snapshots. (e.g. easy migration)
-* Gracefully handles destroyed datasets on source.
+* Gracefully handles datasets that no longer exist on source.
 * Easy installation:
   * Just install zfs-autobackup via pip, or download it manually.
   * Only needs to be installed on one side.
   * Written in python and uses zfs-commands, no 3rd party dependencies or libraries needed.
 * No separate config files or properties. Just one zfs-autobackup command you can copy/paste in your backup script.
 
 ## Installation
 
 You only need to install zfs-autobackup on the side that initiates the backup. The other side doesn't need any extra configuration.
 
 ### Using pip
 
 The recommended way on most servers is to use [pip](https://pypi.org/project/zfs-autobackup/):
@@ -60,6 +64,8 @@ The recommended way on most servers is to use [pip](https://pypi.org/project/zfs
 
 This can also be used to upgrade zfs-autobackup to the newest stable version.
 
+To install the latest beta version add the `--pre` option.
+
 ### Using easy_install
 
 On older servers you might have to use easy_install
```
```diff
@@ -367,6 +373,29 @@ zfs-autobackup will re-evaluate this on every run: As soon as a snapshot doesn't
 
 Snapshots on the source that still have to be sent to the target won't be destroyed, of course. (If the target still wants them, according to the target schedule)
 
+## How zfs-autobackup handles encryption
+
+In normal operation datasets are transferred unaltered:
+
+* Source datasets that are encrypted will be sent over as such and stay encrypted at the target side. (In ZFS this is called raw-mode.) You don't need keys at the target side if you don't want to access the data.
+* Source datasets that are plain will stay that way on the target. (Even if the specified target-path IS encrypted.)
+
+Basically you don't have to do anything or worry about anything.
+
+### Decrypting/encrypting
+
+Things get different if you want to change the encryption-state of a dataset during transfer:
+
+* If you want to decrypt encrypted datasets before sending them, you should use the `--decrypt` option. Datasets will then be stored plain at the target.
+* If you want to encrypt plain datasets when they are received, you should use the `--encrypt` option. Datasets will then be stored encrypted at the target. (Datasets that are already encrypted will still be sent over unaltered in raw-mode.)
+* If you also want to re-encrypt encrypted datasets with the target-side encryption, you can use both options.
+
+Note 1: The --encrypt option relies on inheriting encryption parameters from the parent datasets on the target side. You are responsible for setting those up and loading the keys, so --encrypt is no guarantee of encryption: if you don't set it up, it can't encrypt.
+
+Note 2: Decide what you want at an early stage: if you change the --encrypt or --decrypt parameter after the initial sync, you might get weird and wonderful errors. (nothing dangerous)
+
+I'll add some tips when the issues start to come in on github. :)
+
 ## Tips
 
 * Use ```--debug``` if something goes wrong and you want to see the commands that are executed. This will also stop at the first error.
```
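To make the `--encrypt`/`--decrypt` options introduced in the hunk above concrete, here is a minimal sketch. The dataset and backup names are invented, and per Note 1 the encrypted target parent has to be prepared by hand first:

```console
# prepare an encrypted parent on the target side (you are responsible
# for creating this and for loading its key):
[root@backupserver ~]# zfs create -o encryption=on -o keyformat=passphrase backuppool/encrypted

# store plain source datasets encrypted on the target (inherits the parent's encryption):
[root@server ~]# zfs-autobackup --encrypt --verbose offsite1 backuppool/encrypted

# re-encrypt: decrypt the source data and encrypt it again with the target-side keys:
[root@server ~]# zfs-autobackup --decrypt --encrypt --verbose offsite1 backuppool/encrypted
```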
````diff
@@ -436,105 +465,67 @@ Look in man ssh_config for many more options.
 
 ## Usage
 
 Here you find all the options:
+(NOTE: Quite a lot has changed since the current stable version 3.0. The page you are viewing is for the upcoming version 3.1, which is still in beta.)
 
 ```console
 [root@server ~]# zfs-autobackup --help
-usage: zfs-autobackup [-h] [--ssh-config SSH_CONFIG] [--ssh-source SSH_SOURCE]
-                      [--ssh-target SSH_TARGET] [--keep-source KEEP_SOURCE]
-                      [--keep-target KEEP_TARGET] [--other-snapshots]
-                      [--no-snapshot] [--no-send] [--min-change MIN_CHANGE]
-                      [--allow-empty] [--ignore-replicated] [--no-holds]
-                      [--strip-path STRIP_PATH] [--clear-refreservation]
-                      [--clear-mountpoint]
-                      [--filter-properties FILTER_PROPERTIES]
-                      [--set-properties SET_PROPERTIES] [--rollback]
-                      [--destroy-incompatible] [--ignore-transfer-errors]
-                      [--raw] [--test] [--verbose] [--debug] [--debug-output]
-                      [--progress]
+usage: zfs-autobackup [-h] [--ssh-config CONFIG-FILE] [--ssh-source USER@HOST] [--ssh-target USER@HOST] [--keep-source SCHEDULE] [--keep-target SCHEDULE] [--other-snapshots] [--no-snapshot] [--no-send]
+                      [--no-thinning] [--no-holds] [--min-change BYTES] [--allow-empty] [--ignore-replicated] [--strip-path N] [--clear-refreservation] [--clear-mountpoint] [--filter-properties PROPERY,...]
+                      [--set-properties PROPERTY=VALUE,...] [--rollback] [--destroy-incompatible] [--destroy-missing SCHEDULE] [--ignore-transfer-errors] [--decrypt] [--encrypt] [--test] [--verbose] [--debug]
+                      [--debug-output] [--progress] [--send-pipe COMMAND] [--recv-pipe COMMAND]
                       backup-name [target-path]
 
-zfs-autobackup v3.0-rc12 - Copyright 2020 E.H.Eefting (edwin@datux.nl)
+zfs-autobackup v3.1-beta3 - Copyright 2020 E.H.Eefting (edwin@datux.nl)
 
 positional arguments:
-  backup-name           Name of the backup (you should set the zfs property
-                        "autobackup:backup-name" to true on filesystems you
-                        want to backup
-  target-path           Target ZFS filesystem (optional: if not specified,
-                        zfs-autobackup will only operate as snapshot-tool on
-                        source)
+  backup-name           Name of the backup (you should set the zfs property "autobackup:backup-name" to true on filesystems you want to backup
+  target-path           Target ZFS filesystem (optional: if not specified, zfs-autobackup will only operate as snapshot-tool on source)
 
 optional arguments:
   -h, --help            show this help message and exit
-  --ssh-config SSH_CONFIG
+  --ssh-config CONFIG-FILE
                         Custom ssh client config
-  --ssh-source SSH_SOURCE
-                        Source host to get backup from. (user@hostname)
-                        Default None.
-  --ssh-target SSH_TARGET
-                        Target host to push backup to. (user@hostname) Default
-                        None.
-  --keep-source KEEP_SOURCE
-                        Thinning schedule for old source snapshots. Default:
-                        10,1d1w,1w1m,1m1y
-  --keep-target KEEP_TARGET
-                        Thinning schedule for old target snapshots. Default:
-                        10,1d1w,1w1m,1m1y
-  --other-snapshots     Send over other snapshots as well, not just the ones
-                        created by this tool.
-  --no-snapshot         Don't create new snapshots (useful for finishing
-                        uncompleted backups, or cleanups)
-  --no-send             Don't send snapshots (useful for cleanups, or if you
-                        want a serperate send-cronjob)
-  --min-change MIN_CHANGE
-                        Number of bytes written after which we consider a
-                        dataset changed (default 1)
-  --allow-empty         If nothing has changed, still create empty snapshots.
-                        (same as --min-change=0)
-  --ignore-replicated   Ignore datasets that seem to be replicated some other
-                        way. (No changes since lastest snapshot. Useful for
-                        proxmox HA replication)
-  --no-holds            Don't lock snapshots on the source. (Useful to allow
-                        proxmox HA replication to switches nodes)
-  --strip-path STRIP_PATH
-                        Number of directories to strip from target path (use 1
-                        when cloning zones between 2 SmartOS machines)
+  --ssh-source USER@HOST
+                        Source host to get backup from.
+  --ssh-target USER@HOST
+                        Target host to push backup to.
+  --keep-source SCHEDULE
+                        Thinning schedule for old source snapshots. Default: 10,1d1w,1w1m,1m1y
+  --keep-target SCHEDULE
+                        Thinning schedule for old target snapshots. Default: 10,1d1w,1w1m,1m1y
+  --other-snapshots     Send over other snapshots as well, not just the ones created by this tool.
+  --no-snapshot         Don't create new snapshots (useful for finishing uncompleted backups, or cleanups)
+  --no-send             Don't send snapshots (useful for cleanups, or if you want a serperate send-cronjob)
+  --no-thinning         Do not destroy any snapshots.
+  --no-holds            Don't hold snapshots. (Faster. Allows you to destroy common snapshot.)
+  --min-change BYTES    Number of bytes written after which we consider a dataset changed (default 1)
+  --allow-empty         If nothing has changed, still create empty snapshots. (same as --min-change=0)
+  --ignore-replicated   Ignore datasets that seem to be replicated some other way. (No changes since lastest snapshot. Useful for proxmox HA replication)
+  --strip-path N        Number of directories to strip from target path (use 1 when cloning zones between 2 SmartOS machines)
   --clear-refreservation
-                        Filter "refreservation" property. (recommended, safes
-                        space. same as --filter-properties refreservation)
-  --clear-mountpoint    Set property canmount=noauto for new datasets.
-                        (recommended, prevents mount conflicts. same as --set-
-                        properties canmount=noauto)
-  --filter-properties FILTER_PROPERTIES
-                        List of properties to "filter" when receiving
-                        filesystems. (you can still restore them with zfs
-                        inherit -S)
-  --set-properties SET_PROPERTIES
-                        List of propererties to override when receiving
-                        filesystems. (you can still restore them with zfs
-                        inherit -S)
-  --rollback            Rollback changes to the latest target snapshot before
-                        starting. (normally you can prevent changes by setting
-                        the readonly property on the target_path to on)
+                        Filter "refreservation" property. (recommended, safes space. same as --filter-properties refreservation)
+  --clear-mountpoint    Set property canmount=noauto for new datasets. (recommended, prevents mount conflicts. same as --set-properties canmount=noauto)
+  --filter-properties PROPERY,...
+                        List of properties to "filter" when receiving filesystems. (you can still restore them with zfs inherit -S)
+  --set-properties PROPERTY=VALUE,...
+                        List of propererties to override when receiving filesystems. (you can still restore them with zfs inherit -S)
+  --rollback            Rollback changes to the latest target snapshot before starting. (normally you can prevent changes by setting the readonly property on the target_path to on)
   --destroy-incompatible
-                        Destroy incompatible snapshots on target. Use with
-                        care! (implies --rollback)
+                        Destroy incompatible snapshots on target. Use with care! (implies --rollback)
+  --destroy-missing SCHEDULE
+                        Destroy datasets on target that are missing on the source. Specify the time since the last snapshot, e.g: --destroy-missing 30d
   --ignore-transfer-errors
-                        Ignore transfer errors (still checks if received
-                        filesystem exists. useful for acltype errors)
-  --raw                 For encrypted datasets, send data exactly as it exists
-                        on disk.
-  --test                dont change anything, just show what would be done
-                        (still does all read-only operations)
+                        Ignore transfer errors (still checks if received filesystem exists. useful for acltype errors)
+  --decrypt             Decrypt data before sending it over.
+  --encrypt             Encrypt data after receiving it.
+  --test                dont change anything, just show what would be done (still does all read-only operations)
   --verbose             verbose output
-  --debug               Show zfs commands that are executed, stops after an
-                        exception.
+  --debug               Show zfs commands that are executed, stops after an exception.
   --debug-output        Show zfs commands and their output/exit codes. (noisy)
-  --progress            show zfs progress output (to stderr). Enabled by
-                        default on ttys.
+  --progress            show zfs progress output. Enabled automaticly on ttys. (use --no-progress to disable)
+  --send-pipe COMMAND   pipe zfs send output through COMMAND
+  --recv-pipe COMMAND   pipe zfs recv input through COMMAND
 
 When a filesystem fails, zfs_backup will continue and report the number of
 failures at that end. Also the exit code will indicate the number of failures.
 Full manual at: https://github.com/psy0rz/zfs_autobackup
 ```
````
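Pulling the options above together: the default thinning schedule `10,1d1w,1w1m,1m1y` can be read as "keep the last 10 snapshots, a daily one for a week, a weekly one for a month and a monthly one for a year". A hedged sketch of a typical invocation — hostnames, pool names and the backup name are invented for illustration:

```console
# pull from a remote source host, with the documented default schedules made explicit:
[root@backupserver ~]# zfs-autobackup --ssh-source root@server1 \
    --keep-source 10,1d1w,1w1m,1m1y --keep-target 10,1d1w,1w1m,1m1y \
    --clear-mountpoint --clear-refreservation --verbose offsite1 backuppool/offsite1

# the new --send-pipe/--recv-pipe options pass the stream through arbitrary
# commands; gzip is just one of the compressors the updated CI workflow installs:
[root@backupserver ~]# zfs-autobackup --send-pipe "gzip" --recv-pipe "gzip -d" \
    --verbose offsite1 backuppool/offsite1
```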
## Troubleshooting

```diff
@@ -551,15 +542,19 @@ This usually means you've created a new snapshot on the target side during a backup
 
 This means files have been modified on the target side somehow.
 
-You can use --rollback to automatically roll back such changes.
+Note: This usually happens if the source-side has a non-standard mountpoint for a dataset, and you're using --clear-mountpoint. In this case the target side creates a mountpoint in the parent dataset, causing the change.
+
+You can use --rollback to automatically roll back such changes. Also try destroying the target dataset and using --clear-mountpoint on the next run. This way it won't get mounted.
 
 ### It says 'internal error: Invalid argument'
 
 In some cases (Linux -> FreeBSD) this means certain properties are not fully supported on the target system.
 
-Try using something like: --filter-properties xattr
+Try using something like: --filter-properties xattr or --ignore-transfer-errors.
 
 ### zfs receive fails, but snapshot seems to be received successfully.
 
 This happens if you transfer between different operating systems / zfs versions or feature sets.
 
 Try using the --ignore-transfer-errors option. This will ignore the error. It will still check if the snapshot is actually received correctly.
```
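For the 'Invalid argument' case above, a hypothetical invocation could combine both suggestions — the property list, backup name and target are illustrative assumptions, not prescribed by this diff:

```console
[root@server ~]# zfs-autobackup --filter-properties xattr --ignore-transfer-errors \
    --verbose offsite1 backuppool/offsite1
```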
## Restore example

```diff
@@ -669,4 +664,4 @@ This script will also send the backup status to Zabbix. (if you've installed my
 
 This project was sponsored by:
 
-* (None so far)
+* JetBrains (Provided me with a license for their whole professional product line, https://www.jetbrains.com/pycharm/ )
```
```diff
@@ -1,6 +1,6 @@
 colorama
 argparse
-coverage==4.5.4
+coverage
 python-coveralls
 unittest2
 mock
```
6  tests/autoruntests (new executable file)

```bash
#!/bin/bash

#NOTE: run from top directory

find tests/*.py zfs_autobackup/*.py| entr -r ./tests/run_tests $@
```
```diff
@@ -58,7 +58,8 @@ def redirect_stderr(target):
 
 def shelltest(cmd):
     """execute and print result as nice copypastable string for unit tests (adds extra newlines on top/bottom)"""
-    ret=(subprocess.check_output(cmd , shell=True).decode('utf-8'))
+    ret=(subprocess.check_output("SUDO_ASKPASS=./password.sh sudo -A "+cmd , shell=True).decode('utf-8'))
     print("######### result of: {}".format(cmd))
     print(ret)
     print("#########")
```
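The changed `shelltest()` helper now runs every test command through `sudo -A`, which obtains the password from the program named in `SUDO_ASKPASS`. The actual `password.sh` is deliberately gitignored (see the .gitignore change above); a hypothetical minimal helper, purely as an assumption of what it might contain:

```console
[root@server ~]# cat > password.sh <<'EOF'
#!/bin/sh
# sudo -A runs this helper and reads the password from its stdout
echo "my-sudo-password"
EOF
[root@server ~]# chmod +x password.sh
```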
```diff
@@ -19,7 +19,7 @@ if ! [ -e /root/.ssh/id_rsa ]; then
 fi
 
-coverage run --source zfs_autobackup -m unittest discover -vvvvf $SCRIPTDIR $@ 2>&1
+coverage run --branch --source zfs_autobackup -m unittest discover -vvvvf $SCRIPTDIR $@ 2>&1
 EXIT=$?
 
 echo
```
123  tests/test_cmdpipe.py (new file)

```python
from basetest import *
from zfs_autobackup.CmdPipe import CmdPipe,CmdItem


class TestCmdPipe(unittest2.TestCase):

    def test_single(self):
        """single process stdout and stderr"""
        p=CmdPipe(readonly=False, inp=None)
        err=[]
        out=[]
        p.add(CmdItem(["ls", "-d", "/", "/", "/nonexistent"], stderr_handler=lambda line: err.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,2)))
        executed=p.execute(stdout_handler=lambda line: out.append(line))

        self.assertEqual(err, ["ls: cannot access '/nonexistent': No such file or directory"])
        self.assertEqual(out, ["/","/"])
        self.assertIsNone(executed)

    def test_input(self):
        """test stdinput"""
        p=CmdPipe(readonly=False, inp="test")
        err=[]
        out=[]
        p.add(CmdItem(["echo", "test"], stderr_handler=lambda line: err.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,0)))
        executed=p.execute(stdout_handler=lambda line: out.append(line))

        self.assertEqual(err, [])
        self.assertEqual(out, ["test"])
        self.assertIsNone(executed)

    def test_pipe(self):
        """test piped"""
        p=CmdPipe(readonly=False)
        err1=[]
        err2=[]
        err3=[]
        out=[]
        p.add(CmdItem(["echo", "test"], stderr_handler=lambda line: err1.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,0)))
        p.add(CmdItem(["tr", "e", "E"], stderr_handler=lambda line: err2.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,0)))
        p.add(CmdItem(["tr", "t", "T"], stderr_handler=lambda line: err3.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,0)))
        executed=p.execute(stdout_handler=lambda line: out.append(line))

        self.assertEqual(err1, [])
        self.assertEqual(err2, [])
        self.assertEqual(err3, [])
        self.assertEqual(out, ["TEsT"])
        self.assertIsNone(executed)

        #test str representation as well
        self.assertEqual(str(p), "(echo test) | (tr e E) | (tr t T)")

    def test_pipeerrors(self):
        """test piped stderrs """
        p=CmdPipe(readonly=False)
        err1=[]
        err2=[]
        err3=[]
        out=[]
        p.add(CmdItem(["ls", "/nonexistent1"], stderr_handler=lambda line: err1.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,2)))
        p.add(CmdItem(["ls", "/nonexistent2"], stderr_handler=lambda line: err2.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,2)))
        p.add(CmdItem(["ls", "/nonexistent3"], stderr_handler=lambda line: err3.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,2)))
        executed=p.execute(stdout_handler=lambda line: out.append(line))

        self.assertEqual(err1, ["ls: cannot access '/nonexistent1': No such file or directory"])
        self.assertEqual(err2, ["ls: cannot access '/nonexistent2': No such file or directory"])
        self.assertEqual(err3, ["ls: cannot access '/nonexistent3': No such file or directory"])
        self.assertEqual(out, [])
        self.assertIsNone(executed)

    def test_exitcode(self):
        """test piped exitcodes """
        p=CmdPipe(readonly=False)
        err1=[]
        err2=[]
        err3=[]
        out=[]
        p.add(CmdItem(["bash", "-c", "exit 1"], stderr_handler=lambda line: err1.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,1)))
        p.add(CmdItem(["bash", "-c", "exit 2"], stderr_handler=lambda line: err2.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,2)))
        p.add(CmdItem(["bash", "-c", "exit 3"], stderr_handler=lambda line: err3.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,3)))
        executed=p.execute(stdout_handler=lambda line: out.append(line))

        self.assertEqual(err1, [])
        self.assertEqual(err2, [])
        self.assertEqual(err3, [])
        self.assertEqual(out, [])
        self.assertIsNone(executed)

    def test_readonly_execute(self):
        """everything readonly, just should execute"""

        p=CmdPipe(readonly=True)
        err1=[]
        err2=[]
        out=[]

        def true_exit(exit_code):
            return True

        p.add(CmdItem(["echo", "test1"], stderr_handler=lambda line: err1.append(line), exit_handler=true_exit, readonly=True))
        p.add(CmdItem(["echo", "test2"], stderr_handler=lambda line: err2.append(line), exit_handler=true_exit, readonly=True))
        executed=p.execute(stdout_handler=lambda line: out.append(line))

        self.assertEqual(err1, [])
        self.assertEqual(err2, [])
        self.assertEqual(out, ["test2"])
        self.assertTrue(executed)

    def test_readonly_skip(self):
        """one command not readonly, skip"""

        p=CmdPipe(readonly=True)
        err1=[]
        err2=[]
        out=[]
        p.add(CmdItem(["echo", "test1"], stderr_handler=lambda line: err1.append(line), readonly=False))
        p.add(CmdItem(["echo", "test2"], stderr_handler=lambda line: err2.append(line), readonly=True))
        executed=p.execute(stdout_handler=lambda line: out.append(line))

        self.assertEqual(err1, [])
        self.assertEqual(err2, [])
        self.assertEqual(out, [])
        self.assertTrue(executed)
```
```diff
@@ -14,16 +14,16 @@ class TestZfsNode(unittest2.TestCase):
 
         #initial backup
         with patch('time.strftime', return_value="10101111000000"): #1000 years in past
-            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-holds".split(" ")).run())
+            self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --no-holds".split(" ")).run())
 
         with patch('time.strftime', return_value="20101111000000"): #far in past
-            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-holds --allow-empty".split(" ")).run())
+            self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --no-holds --allow-empty".split(" ")).run())
 
 
         with self.subTest("Should do nothing yet"):
             with OutputIO() as buf:
                 with redirect_stdout(buf):
-                    self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-snapshot --destroy-missing 0s".split(" ")).run())
+                    self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --no-snapshot --destroy-missing 0s".split(" ")).run())
 
                 print(buf.getvalue())
                 self.assertNotIn(": Destroy missing", buf.getvalue())
@@ -36,7 +36,7 @@ class TestZfsNode(unittest2.TestCase):
 
         with OutputIO() as buf:
             with redirect_stdout(buf), redirect_stderr(buf):
-                self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-snapshot --destroy-missing 0s".split(" ")).run())
+                self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --no-snapshot --destroy-missing 0s".split(" ")).run())
 
             print(buf.getvalue())
             #should have done the snapshot cleanup for destoy missing:
@@ -54,7 +54,7 @@ class TestZfsNode(unittest2.TestCase):
         with OutputIO() as buf:
             with redirect_stdout(buf):
                 #100y: lastest should not be old enough, while second to latest snapshot IS old enough:
-                self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-snapshot --destroy-missing 100y".split(" ")).run())
+                self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --no-snapshot --destroy-missing 100y".split(" ")).run())
 
             print(buf.getvalue())
             self.assertIn(": Waiting for deadline", buf.getvalue())
@@ -62,7 +62,7 @@ class TestZfsNode(unittest2.TestCase):
         #past deadline, destroy
         with OutputIO() as buf:
             with redirect_stdout(buf):
-                self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-snapshot --destroy-missing 1y".split(" ")).run())
+                self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --no-snapshot --destroy-missing 1y".split(" ")).run())
 
             print(buf.getvalue())
             self.assertIn("sub: Destroying", buf.getvalue())
@@ -75,7 +75,7 @@ class TestZfsNode(unittest2.TestCase):
 
         with OutputIO() as buf:
             with redirect_stdout(buf):
-                self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-snapshot --destroy-missing 0s".split(" ")).run())
+                self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --no-snapshot --destroy-missing 0s".split(" ")).run())
 
             print(buf.getvalue())
 
@@ -90,13 +90,13 @@ class TestZfsNode(unittest2.TestCase):
 
         with OutputIO() as buf:
             with redirect_stdout(buf), redirect_stderr(buf):
-                self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-snapshot --destroy-missing 0s".split(" ")).run())
+                self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --no-snapshot --destroy-missing 0s".split(" ")).run())
 
             print(buf.getvalue())
             #now tries to destroy our own last snapshot (before the final destroy of the dataset)
             self.assertIn("fs1@test-20101111000000: Destroying", buf.getvalue())
             #but cant finish because still in use:
-            self.assertIn("fs1: Error during destoy missing", buf.getvalue())
+            self.assertIn("fs1: Error during --destroy-missing", buf.getvalue())
 
         shelltest("zfs destroy test_target1/clone1")
 
@@ -105,7 +105,7 @@ class TestZfsNode(unittest2.TestCase):
 
         with OutputIO() as buf:
             with redirect_stdout(buf), redirect_stderr(buf):
-                self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-snapshot --destroy-missing 0s".split(" ")).run())
+                self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --no-snapshot --destroy-missing 0s".split(" ")).run())
 
             print(buf.getvalue())
             #should have done the snapshot cleanup for destoy missing:
@@ -113,7 +113,7 @@ class TestZfsNode(unittest2.TestCase):
 
         with OutputIO() as buf:
             with redirect_stdout(buf), redirect_stderr(buf):
-                self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-snapshot --destroy-missing 0s".split(" ")).run())
+                self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --no-snapshot --destroy-missing 0s".split(" ")).run())
 
             print(buf.getvalue())
             #on second run it sees the dangling ex-parent but doesnt know what to do with it (since it has no own snapshot)
```
192  tests/test_encryption.py (new file)

```python
from zfs_autobackup.CmdPipe import CmdPipe
from basetest import *
import time

# We have to do a LOT to properly test encryption/decryption/raw transfers
#
# For every scenario we need at least:
# - plain source dataset
# - encrypted source dataset
# - plain target path
# - encrypted target path
# - do a full transfer
# - do a incremental transfer

# Scenarios:
# - Raw transfer
# - Decryption transfer (--decrypt)
# - Encryption transfer (--encrypt)
# - Re-encryption transfer (--decrypt --encrypt)

class TestZfsEncryption(unittest2.TestCase):

    def setUp(self):
        prepare_zpools()

        try:
            shelltest("zfs get encryption test_source1")
        except:
            self.skipTest("Encryption not supported on this ZFS version.")

    def prepare_encrypted_dataset(self, key, path, unload_key=False):

        # create encrypted source dataset
        shelltest("echo {} > /tmp/zfstest.key".format(key))
        shelltest("zfs create -o keylocation=file:///tmp/zfstest.key -o keyformat=passphrase -o encryption=on {}".format(path))

        if unload_key:
            shelltest("zfs unmount {}".format(path))
            shelltest("zfs unload-key {}".format(path))

        # r=shelltest("dd if=/dev/zero of=/test_source1/fs1/enc1/data.txt bs=200000 count=1")

    def test_raw(self):
        """send encrypted data unaltered (standard operation)"""

        self.prepare_encrypted_dataset("11111111", "test_source1/fs1/encryptedsource")
        self.prepare_encrypted_dataset("11111111", "test_source1/fs1/encryptedsourcekeyless", unload_key=True)  # raw mode shouldn't need a key
        self.prepare_encrypted_dataset("22222222", "test_target1/encryptedtarget")

        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-progress --allow-empty --exclude-received".split(" ")).run())
            self.assertFalse(ZfsAutobackup("test test_target1/encryptedtarget --verbose --no-progress --no-snapshot --exclude-received".split(" ")).run())

        with patch('time.strftime', return_value="20101111000001"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-progress --allow-empty --exclude-received".split(" ")).run())
            self.assertFalse(ZfsAutobackup("test test_target1/encryptedtarget --verbose --no-progress --no-snapshot --exclude-received".split(" ")).run())

        r = shelltest("zfs get -r -t filesystem encryptionroot test_target1")
        self.assertMultiLineEqual(r,"""
NAME                                                                   PROPERTY        VALUE                                                                  SOURCE
test_target1                                                           encryptionroot  -                                                                      -
test_target1/encryptedtarget                                           encryptionroot  test_target1/encryptedtarget                                           -
test_target1/encryptedtarget/test_source1                              encryptionroot  test_target1/encryptedtarget                                           -
test_target1/encryptedtarget/test_source1/fs1                          encryptionroot  -                                                                      -
test_target1/encryptedtarget/test_source1/fs1/encryptedsource          encryptionroot  test_target1/encryptedtarget/test_source1/fs1/encryptedsource         -
test_target1/encryptedtarget/test_source1/fs1/encryptedsourcekeyless  encryptionroot  test_target1/encryptedtarget/test_source1/fs1/encryptedsourcekeyless  -
test_target1/encryptedtarget/test_source1/fs1/sub                      encryptionroot  -                                                                      -
test_target1/encryptedtarget/test_source2                              encryptionroot  test_target1/encryptedtarget                                           -
test_target1/encryptedtarget/test_source2/fs2                          encryptionroot  test_target1/encryptedtarget                                           -
test_target1/encryptedtarget/test_source2/fs2/sub                      encryptionroot  -                                                                      -
test_target1/test_source1                                              encryptionroot  -                                                                      -
test_target1/test_source1/fs1                                          encryptionroot  -                                                                      -
test_target1/test_source1/fs1/encryptedsource                          encryptionroot  test_target1/test_source1/fs1/encryptedsource                         -
test_target1/test_source1/fs1/encryptedsourcekeyless                   encryptionroot  test_target1/test_source1/fs1/encryptedsourcekeyless                  -
test_target1/test_source1/fs1/sub                                      encryptionroot  -                                                                      -
test_target1/test_source2                                              encryptionroot  -                                                                      -
test_target1/test_source2/fs2                                          encryptionroot  -                                                                      -
test_target1/test_source2/fs2/sub                                      encryptionroot  -                                                                      -
""")

    def test_decrypt(self):
        """decrypt data and store unencrypted (--decrypt)"""

        self.prepare_encrypted_dataset("11111111", "test_source1/fs1/encryptedsource")
        self.prepare_encrypted_dataset("22222222", "test_target1/encryptedtarget")

        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-progress --decrypt --allow-empty --exclude-received".split(" ")).run())
            self.assertFalse(ZfsAutobackup("test test_target1/encryptedtarget --verbose --no-progress --decrypt --no-snapshot --exclude-received".split(" ")).run())

        with patch('time.strftime', return_value="20101111000001"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-progress --decrypt --allow-empty --exclude-received".split(" ")).run())
            self.assertFalse(ZfsAutobackup("test test_target1/encryptedtarget --verbose --no-progress --decrypt --no-snapshot --exclude-received".split(" ")).run())

        r = shelltest("zfs get -r -t filesystem encryptionroot test_target1")
        self.assertEqual(r, """
NAME                                                            PROPERTY        VALUE                         SOURCE
test_target1                                                    encryptionroot  -                             -
test_target1/encryptedtarget                                    encryptionroot  test_target1/encryptedtarget  -
test_target1/encryptedtarget/test_source1                       encryptionroot  test_target1/encryptedtarget  -
test_target1/encryptedtarget/test_source1/fs1                   encryptionroot  -                             -
test_target1/encryptedtarget/test_source1/fs1/encryptedsource   encryptionroot  -                             -
test_target1/encryptedtarget/test_source1/fs1/sub               encryptionroot  -                             -
test_target1/encryptedtarget/test_source2                       encryptionroot  test_target1/encryptedtarget  -
test_target1/encryptedtarget/test_source2/fs2                   encryptionroot  test_target1/encryptedtarget  -
test_target1/encryptedtarget/test_source2/fs2/sub               encryptionroot  -                             -
test_target1/test_source1                                       encryptionroot  -                             -
test_target1/test_source1/fs1                                   encryptionroot  -                             -
test_target1/test_source1/fs1/encryptedsource                   encryptionroot  -                             -
test_target1/test_source1/fs1/sub                               encryptionroot  -                             -
test_target1/test_source2                                       encryptionroot  -                             -
test_target1/test_source2/fs2                                   encryptionroot  -                             -
test_target1/test_source2/fs2/sub                               encryptionroot  -                             -
""")

    def test_encrypt(self):
        """send normal data set and store encrypted on the other side (--encrypt) issue #60 """

        self.prepare_encrypted_dataset("11111111", "test_source1/fs1/encryptedsource")
        self.prepare_encrypted_dataset("22222222", "test_target1/encryptedtarget")

        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-progress --encrypt --debug --allow-empty --exclude-received".split(" ")).run())
            self.assertFalse(ZfsAutobackup("test test_target1/encryptedtarget --verbose --no-progress --encrypt --debug --no-snapshot --exclude-received".split(" ")).run())

        with patch('time.strftime', return_value="20101111000001"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-progress --encrypt --debug --allow-empty --exclude-received".split(" ")).run())
            self.assertFalse(ZfsAutobackup("test test_target1/encryptedtarget --verbose --no-progress --encrypt --debug --no-snapshot --exclude-received".split(" ")).run())

        r = shelltest("zfs get -r -t filesystem encryptionroot test_target1")
        self.assertEqual(r, """
NAME                                                            PROPERTY        VALUE                                                           SOURCE
test_target1                                                    encryptionroot  -                                                               -
test_target1/encryptedtarget                                    encryptionroot  test_target1/encryptedtarget                                    -
test_target1/encryptedtarget/test_source1                       encryptionroot  test_target1/encryptedtarget                                    -
test_target1/encryptedtarget/test_source1/fs1                   encryptionroot  test_target1/encryptedtarget                                    -
test_target1/encryptedtarget/test_source1/fs1/encryptedsource   encryptionroot  test_target1/encryptedtarget/test_source1/fs1/encryptedsource  -
test_target1/encryptedtarget/test_source1/fs1/sub               encryptionroot  test_target1/encryptedtarget                                    -
test_target1/encryptedtarget/test_source2                       encryptionroot  test_target1/encryptedtarget                                    -
test_target1/encryptedtarget/test_source2/fs2                   encryptionroot  test_target1/encryptedtarget                                    -
test_target1/encryptedtarget/test_source2/fs2/sub               encryptionroot  test_target1/encryptedtarget                                    -
test_target1/test_source1                                       encryptionroot  -                                                               -
test_target1/test_source1/fs1                                   encryptionroot  -                                                               -
test_target1/test_source1/fs1/encryptedsource                   encryptionroot  test_target1/test_source1/fs1/encryptedsource                   -
test_target1/test_source1/fs1/sub                               encryptionroot  -                                                               -
test_target1/test_source2                                       encryptionroot  -                                                               -
test_target1/test_source2/fs2                                   encryptionroot  -                                                               -
test_target1/test_source2/fs2/sub                               encryptionroot  -                                                               -
""")

    def test_reencrypt(self):
        """reencrypt data (--decrypt --encrypt) """

        self.prepare_encrypted_dataset("11111111", "test_source1/fs1/encryptedsource")
        self.prepare_encrypted_dataset("22222222", "test_target1/encryptedtarget")

        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup(
                "test test_target1 --verbose --no-progress --decrypt --encrypt --debug --allow-empty --exclude-received".split(" ")).run())
            self.assertFalse(ZfsAutobackup(
                "test test_target1/encryptedtarget --verbose --no-progress --decrypt --encrypt --debug --no-snapshot --exclude-received".split(
                    " ")).run())

        with patch('time.strftime', return_value="20101111000001"):
            self.assertFalse(ZfsAutobackup(
                "test test_target1 --verbose --no-progress --decrypt --encrypt --debug --allow-empty --exclude-received".split(" ")).run())
            self.assertFalse(ZfsAutobackup(
                "test test_target1/encryptedtarget --verbose --no-progress --decrypt --encrypt --debug --no-snapshot --exclude-received".split(
                    " ")).run())

        r = shelltest("zfs get -r -t filesystem encryptionroot test_target1")
        self.assertEqual(r, """
NAME                                                            PROPERTY        VALUE                         SOURCE
test_target1                                                    encryptionroot  -                             -
test_target1/encryptedtarget                                    encryptionroot  test_target1/encryptedtarget  -
test_target1/encryptedtarget/test_source1                       encryptionroot  test_target1/encryptedtarget  -
test_target1/encryptedtarget/test_source1/fs1                   encryptionroot  test_target1/encryptedtarget  -
test_target1/encryptedtarget/test_source1/fs1/encryptedsource   encryptionroot  test_target1/encryptedtarget  -
test_target1/encryptedtarget/test_source1/fs1/sub               encryptionroot  test_target1/encryptedtarget  -
test_target1/encryptedtarget/test_source2                       encryptionroot  test_target1/encryptedtarget  -
test_target1/encryptedtarget/test_source2/fs2                   encryptionroot  test_target1/encryptedtarget  -
test_target1/encryptedtarget/test_source2/fs2/sub               encryptionroot  test_target1/encryptedtarget  -
test_target1/test_source1                                       encryptionroot  -                             -
test_target1/test_source1/fs1                                   encryptionroot  -                             -
test_target1/test_source1/fs1/encryptedsource                   encryptionroot  -                             -
test_target1/test_source1/fs1/sub                               encryptionroot  -                             -
test_target1/test_source2                                       encryptionroot  -                             -
test_target1/test_source2/fs2                                   encryptionroot  -                             -
test_target1/test_source2/fs2/sub                               encryptionroot  -                             -
""")
```
```diff
@@ -1,5 +1,5 @@
 from basetest import *
-from zfs_autobackup.ExecuteNode import ExecuteNode
+from zfs_autobackup.ExecuteNode import *
 
 print("THIS TEST REQUIRES SSH TO LOCALHOST")
 
@@ -15,7 +15,7 @@ class TestExecuteNode(unittest2.TestCase):
         self.assertEqual(node.run(["echo","test"]), ["test"])
 
         with self.subTest("error exit code"):
-            with self.assertRaises(subprocess.CalledProcessError):
+            with self.assertRaises(ExecuteError):
                 node.run(["false"])
 
         #
@@ -26,9 +26,9 @@ class TestExecuteNode(unittest2.TestCase):
         with self.subTest("multiline tabsplit"):
             self.assertEqual(node.run(["echo","l1c1\tl1c2\nl2c1\tl2c2"], tab_split=True), [['l1c1', 'l1c2'], ['l2c1', 'l2c2']])
 
-        #escaping test (shouldnt be a problem locally, single quotes can be a problem remote via ssh)
+        #escaping test
         with self.subTest("escape test"):
-            s="><`'\"@&$()$bla\\/.*!#test _+-={}[]|"
+            s="><`'\"@&$()$bla\\/.* !#test _+-={}[]|${bla} $bla"
             self.assertEqual(node.run(["echo",s]), [s])
 
         #return std err as well, trigger stderr by listing something non existing
@@ -51,6 +51,15 @@ class TestExecuteNode(unittest2.TestCase):
         with self.subTest("stdin process with inp=None (shouldn't hang)"):
             self.assertEqual(node.run(["cat"]), [])
 
+        # let the system do the piping with an unescaped |:
+        with self.subTest("system piping test"):
+
+            #first make sure the actual | character is still properly escaped:
+            self.assertEqual(node.run(["echo","|"]), ["|"])
+
+            #now pipe
+            self.assertEqual(node.run(["echo", "abc", node.PIPE, "tr", "a", "A" ]), ["Abc"])
+
     def test_basics_local(self):
         node=ExecuteNode(debug_output=True)
         self.basics(node)
@@ -64,7 +73,7 @@ class TestExecuteNode(unittest2.TestCase):
     def test_readonly(self):
         node=ExecuteNode(debug_output=True, readonly=True)
 
-        self.assertEqual(node.run(["echo","test"], readonly=False), None)
+        self.assertEqual(node.run(["echo","test"], readonly=False), [])
         self.assertEqual(node.run(["echo","test"], readonly=True), ["test"])
 
 
@@ -81,29 +90,33 @@ class TestExecuteNode(unittest2.TestCase):
             nodeb.run(["true"], inp=output)
 
         with self.subTest("error on pipe input side"):
-            with self.assertRaises(subprocess.CalledProcessError):
+            with self.assertRaises(ExecuteError):
                 output=nodea.run(["false"], pipe=True)
                 nodeb.run(["true"], inp=output)
 
         with self.subTest("error on both sides, ignore exit codes"):
             output=nodea.run(["false"], pipe=True, valid_exitcodes=[])
             nodeb.run(["false"], inp=output, valid_exitcodes=[])
 
         with self.subTest("error on pipe output side "):
-            with self.assertRaises(subprocess.CalledProcessError):
+            with self.assertRaises(ExecuteError):
                 output=nodea.run(["true"], pipe=True)
                 nodeb.run(["false"], inp=output)
 
         with self.subTest("error on both sides of pipe"):
-            with self.assertRaises(subprocess.CalledProcessError):
+            with self.assertRaises(ExecuteError):
                 output=nodea.run(["false"], pipe=True)
                 nodeb.run(["false"], inp=output)
 
         with self.subTest("check stderr on pipe output side"):
-            output=nodea.run(["true"], pipe=True)
-            (stdout, stderr)=nodeb.run(["ls", "nonexistingfile"], inp=output, return_stderr=True, valid_exitcodes=[0,2])
+            output=nodea.run(["true"], pipe=True, valid_exitcodes=[0])
+            (stdout, stderr)=nodeb.run(["ls", "nonexistingfile"], inp=output, return_stderr=True, valid_exitcodes=[2])
             self.assertEqual(stdout,[])
            self.assertRegex(stderr[0], "nonexistingfile" )
 
         with self.subTest("check stderr on pipe input side (should be only printed)"):
-            output=nodea.run(["ls", "nonexistingfile"], pipe=True)
-            (stdout, stderr)=nodeb.run(["true"], inp=output, return_stderr=True, valid_exitcodes=[0,2])
+            output=nodea.run(["ls", "nonexistingfile"], pipe=True, valid_exitcodes=[2])
+            (stdout, stderr)=nodeb.run(["true"], inp=output, return_stderr=True, valid_exitcodes=[0])
             self.assertEqual(stdout,[])
             self.assertEqual(stderr,[])
```
```diff
@@ -20,7 +20,7 @@ class TestExternalFailures(unittest2.TestCase):
         r = shelltest("dd if=/dev/zero of=/test_target1/waste bs=250M count=1")
 
         # should fail and leave resume token (if supported)
-        self.assertTrue(ZfsAutobackup("test test_target1 --verbose".split(" ")).run())
+        self.assertTrue(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())
 
         # free up space
         r = shelltest("rm /test_target1/waste")
@@ -38,7 +38,7 @@ class TestExternalFailures(unittest2.TestCase):
         # --test should resume and succeed
         with OutputIO() as buf:
             with redirect_stdout(buf):
-                self.assertFalse(ZfsAutobackup("test test_target1 --verbose --test".split(" ")).run())
+                self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --test".split(" ")).run())
 
             print(buf.getvalue())
 
@@ -52,7 +52,7 @@ class TestExternalFailures(unittest2.TestCase):
         # should resume and succeed
         with OutputIO() as buf:
             with redirect_stdout(buf):
-                self.assertFalse(ZfsAutobackup("test test_target1 --verbose".split(" ")).run())
+                self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())
 
             print(buf.getvalue())
 
@@ -82,7 +82,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000
 
         # initial backup
         with patch('time.strftime', return_value="20101111000000"):
-            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty".split(" ")).run())
+            self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty".split(" ")).run())
 
         # incremental backup leaves resume token
         with patch('time.strftime', return_value="20101111000001"):
@@ -91,7 +91,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000
         # --test should resume and succeed
         with OutputIO() as buf:
             with redirect_stdout(buf):
-                self.assertFalse(ZfsAutobackup("test test_target1 --verbose --test".split(" ")).run())
+                self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --test".split(" ")).run())
 
             print(buf.getvalue())
 
@@ -105,7 +105,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000
         # should resume and succeed
         with OutputIO() as buf:
             with redirect_stdout(buf):
-                self.assertFalse(ZfsAutobackup("test test_target1 --verbose".split(" ")).run())
+                self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())
 
             print(buf.getvalue())
 
@@ -149,11 +149,11 @@ test_target1/test_source2/fs2/sub@test-20101111000000
 
         # --test try again, should abort old resume
         with patch('time.strftime', return_value="20101111000001"):
-            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --test".split(" ")).run())
+            self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --test".split(" ")).run())
 
         # try again, should abort old resume
         with patch('time.strftime', return_value="20101111000001"):
-            self.assertFalse(ZfsAutobackup("test test_target1 --verbose".split(" ")).run())
+            self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())
 
         r = shelltest("zfs list -H -o name -r -t all test_target1")
         self.assertMultiLineEqual(r, """
@@ -177,7 +177,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000
 
         # initial backup
         with patch('time.strftime', return_value="20101111000000"):
-            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty".split(" ")).run())
+            self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty".split(" ")).run())
 
         # icremental backup, leaves resume token
         with patch('time.strftime', return_value="20101111000001"):
@@ -188,11 +188,11 @@ test_target1/test_source2/fs2/sub@test-20101111000000
 
         # --test try again, should abort old resume
         with patch('time.strftime', return_value="20101111000002"):
-            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --test".split(" ")).run())
+            self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --test".split(" ")).run())
 
         # try again, should abort old resume
         with patch('time.strftime', return_value="20101111000002"):
-            self.assertFalse(ZfsAutobackup("test test_target1 --verbose".split(" ")).run())
+            self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())
 
         r = shelltest("zfs list -H -o name -r -t all test_target1")
         self.assertMultiLineEqual(r, """
@@ -216,7 +216,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000
         self.skipTest("Resume not supported in this ZFS userspace version")
 
         with patch('time.strftime', return_value="20101111000000"):
-            self.assertFalse(ZfsAutobackup("test test_target1 --verbose".split(" ")).run())
+            self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())
 
         # generate resume
         with patch('time.strftime', return_value="20101111000001"):
@@ -227,7 +227,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000
         # incremental, doesnt want previous anymore
         with patch('time.strftime', return_value="20101111000002"):
             self.assertFalse(ZfsAutobackup(
-                "test test_target1 --verbose --keep-target=0 --debug --allow-empty".split(" ")).run())
+                "test test_target1 --no-progress --verbose --keep-target=0 --debug --allow-empty".split(" ")).run())
 
             print(buf.getvalue())
 
@@ -250,14 +250,14 @@ test_target1/test_source2/fs2/sub@test-20101111000002
     def test_missing_common(self):
 
         with patch('time.strftime', return_value="20101111000000"):
-            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty".split(" ")).run())
+            self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty".split(" ")).run())
 
         # remove common snapshot and leave nothing
         shelltest("zfs release zfs_autobackup:test test_source1/fs1@test-20101111000000")
         shelltest("zfs destroy test_source1/fs1@test-20101111000000")
 
         with patch('time.strftime', return_value="20101111000001"):
-            self.assertTrue(ZfsAutobackup("test test_target1 --verbose --allow-empty".split(" ")).run())
+            self.assertTrue(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty".split(" ")).run())
 
     #UPDATE: offcourse the one thing that wasn't tested had a bug :( (in ExecuteNode.run()).
     def test_ignoretransfererrors(self):
```
52  tests/test_log.py (new file)

```python
import zfs_autobackup.LogConsole
from basetest import *


class TestLog(unittest2.TestCase):

    def test_colored(self):
        """test with color output"""
        with OutputIO() as buf:
            with redirect_stdout(buf):
                l=LogConsole(show_verbose=False, show_debug=False, color=True)
                l.verbose("verbose")
                l.debug("debug")

            with redirect_stdout(buf):
                l=LogConsole(show_verbose=True, show_debug=True, color=True)
                l.verbose("verbose")
                l.debug("debug")

            with redirect_stderr(buf):
                l=LogConsole(show_verbose=False, show_debug=False, color=True)
                l.error("error")

            print(list(buf.getvalue()))
            self.assertEqual(list(buf.getvalue()), ['\x1b', '[', '2', '2', 'm', ' ', ' ', 'v', 'e', 'r', 'b', 'o', 's', 'e', '\x1b', '[', '0', 'm', '\n', '\x1b', '[', '3', '2', 'm', '#', ' ', 'd', 'e', 'b', 'u', 'g', '\x1b', '[', '0', 'm', '\n', '\x1b', '[', '3', '1', 'm', '\x1b', '[', '1', 'm', '!', ' ', 'e', 'r', 'r', 'o', 'r', '\x1b', '[', '0', 'm', '\n'])

    def test_nocolor(self):
        """test without color output"""

        with OutputIO() as buf:
            with redirect_stdout(buf):
                l=LogConsole(show_verbose=False, show_debug=False, color=False)
                l.verbose("verbose")
                l.debug("debug")

            with redirect_stdout(buf):
                l=LogConsole(show_verbose=True, show_debug=True, color=False)
                l.verbose("verbose")
                l.debug("debug")

            with redirect_stderr(buf):
                l=LogConsole(show_verbose=False, show_debug=False, color=False)
                l.error("error")

            print(list(buf.getvalue()))
            self.assertEqual(list(buf.getvalue()), [' ', ' ', 'v', 'e', 'r', 'b', 'o', 's', 'e', '\n', '#', ' ', 'd', 'e', 'b', 'u', 'g', '\n', '!', ' ', 'e', 'r', 'r', 'o', 'r', '\n'])


zfs_autobackup.LogConsole.colorama=False
```
```diff
@@ -34,7 +34,7 @@ class TestZfsScaling(unittest2.TestCase):
         with patch.object(ExecuteNode,'run', run_count) as p:
 
             with patch('time.strftime', return_value="20101112000000"):
-                self.assertFalse(ZfsAutobackup("test test_target1 --verbose --keep-source=10000 --keep-target=10000 --no-holds --allow-empty".split(" ")).run())
+                self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --keep-source=10000 --keep-target=10000 --no-holds --allow-empty".split(" ")).run())
 
 
         #this triggers if you make a change with an impact of more than O(snapshot_count/2)
@@ -47,7 +47,7 @@ class TestZfsScaling(unittest2.TestCase):
         with patch.object(ExecuteNode,'run', run_count) as p:
 
             with patch('time.strftime', return_value="20101112000001"):
-                self.assertFalse(ZfsAutobackup("test test_target1 --verbose --keep-source=10000 --keep-target=10000 --no-holds --allow-empty".split(" ")).run())
+                self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --keep-source=10000 --keep-target=10000 --no-holds --allow-empty".split(" ")).run())
 
 
         #this triggers if you make a change with a performance impact of more than O(snapshot_count/2)
@@ -69,11 +69,12 @@ class TestZfsScaling(unittest2.TestCase):
 
         global run_counter
 
         #first run
         run_counter=0
         with patch.object(ExecuteNode,'run', run_count) as p:
 
             with patch('time.strftime', return_value="20101112000000"):
-                self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-holds --allow-empty".split(" ")).run())
+                self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --no-holds --allow-empty".split(" ")).run())
 
 
         #this triggers if you make a change with an impact of more than O(snapshot_count/2)
@@ -82,11 +83,12 @@ class TestZfsScaling(unittest2.TestCase):
         self.assertLess(abs(run_counter-expected_runs), dataset_count/2)
 
 
         #second run, should have higher number of expected_runs
         run_counter=0
         with patch.object(ExecuteNode,'run', run_count) as p:
 
             with patch('time.strftime', return_value="20101112000001"):
-                self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-holds --allow-empty".split(" ")).run())
+                self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --no-holds --allow-empty".split(" ")).run())
 
 
         #this triggers if you make a change with a performance impact of more than O(snapshot_count/2)
```
49  tests/test_sendrecvpipes.py  Normal file
@@ -0,0 +1,49 @@

import zfs_autobackup.compressors
from basetest import *
import time


class TestSendRecvPipes(unittest2.TestCase):
    """test input/output pipes for zfs send and recv"""

    def setUp(self):
        prepare_zpools()
        self.longMessage=True

    def test_send_basics(self):
        """send basics (remote/local send pipe)"""

        with self.subTest("local local pipe"):
            with patch('time.strftime', return_value="20101111000000"):
                self.assertFalse(ZfsAutobackup(["test", "test_target1", "--exclude-received", "--no-holds", "--no-progress", "--send-pipe=dd bs=1M", "--recv-pipe=dd bs=2M"]).run())

            shelltest("zfs destroy -r test_target1/test_source1/fs1/sub")

        with self.subTest("remote local pipe"):
            with patch('time.strftime', return_value="20101111000000"):
                self.assertFalse(ZfsAutobackup(["test", "test_target1", "--exclude-received", "--no-holds", "--no-progress", "--ssh-source=localhost", "--send-pipe=dd bs=1M", "--recv-pipe=dd bs=2M"]).run())

            shelltest("zfs destroy -r test_target1/test_source1/fs1/sub")

        with self.subTest("local remote pipe"):
            with patch('time.strftime', return_value="20101111000000"):
                self.assertFalse(ZfsAutobackup(["test", "test_target1", "--exclude-received", "--no-holds", "--no-progress", "--ssh-target=localhost", "--send-pipe=dd bs=1M", "--recv-pipe=dd bs=2M"]).run())

            shelltest("zfs destroy -r test_target1/test_source1/fs1/sub")

        with self.subTest("remote remote pipe"):
            with patch('time.strftime', return_value="20101111000000"):
                self.assertFalse(ZfsAutobackup(["test", "test_target1", "--exclude-received", "--no-holds", "--no-progress", "--ssh-source=localhost", "--ssh-target=localhost", "--send-pipe=dd bs=1M", "--recv-pipe=dd bs=2M"]).run())

    def test_compress(self):
        """send basics (remote/local send pipe)"""

        for compress in zfs_autobackup.compressors.COMPRESS_CMDS.keys():

            with self.subTest("compress "+compress):
                with patch('time.strftime', return_value="20101111000000"):
                    self.assertFalse(ZfsAutobackup(["test", "test_target1", "--exclude-received", "--no-holds", "--no-progress", "--compress="+compress]).run())

                shelltest("zfs destroy -r test_target1/test_source1/fs1/sub")
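The same pipes can be used directly, outside the test harness. A hedged sketch of what these subtests exercise, assuming the class is importable as `zfs_autobackup.ZfsAutobackup.ZfsAutobackup` and that `gzip` is one of the `COMPRESS_CMDS` keys (both plausible from the diff, not confirmed here):

```python
from zfs_autobackup.ZfsAutobackup import ZfsAutobackup

# mirror of the subtests above: a custom transfer command on each side of the
# zfs send/recv stream, plus a compression filter wrapped around the transport
args = ["test", "test_target1", "--no-progress", "--no-holds",
        "--send-pipe=dd bs=1M", "--recv-pipe=dd bs=2M", "--compress=gzip"]
assert ZfsAutobackup(args).run() == 0   # the tests treat a falsy return as success
```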
@@ -23,6 +23,23 @@ class TestThinner(unittest2.TestCase):

        # return super().setUp()

    def test_exceptions(self):
        with self.assertRaisesRegexp(Exception, "^Invalid period"):
            ThinnerRule("12X12m")

        with self.assertRaisesRegexp(Exception, "^Invalid ttl"):
            ThinnerRule("12d12X")

        with self.assertRaisesRegexp(Exception, "^Period cant be"):
            ThinnerRule("12d1d")

        with self.assertRaisesRegexp(Exception, "^Invalid schedule"):
            ThinnerRule("XXX")

        with self.assertRaisesRegexp(Exception, "^Number of"):
            Thinner("-1")

    def test_incremental(self):
        ok=['2023-01-03 10:53:16',
            '2024-01-02 15:43:29',

@@ -138,5 +155,5 @@ class TestThinner(unittest2.TestCase):
        self.assertEqual(result, ok)

if __name__ == '__main__':
    unittest.main()
# if __name__ == '__main__':
#     unittest.main()
@@ -1,8 +1,9 @@
from zfs_autobackup.CmdPipe import CmdPipe
from basetest import *
import time


class TestZfsAutobackup(unittest2.TestCase):

    def setUp(self):
@@ -11,13 +12,29 @@ class TestZfsAutobackup(unittest2.TestCase):

    def test_invalidpars(self):

        self.assertEqual(ZfsAutobackup("test test_target1 --keep-source -1".split(" ")).run(), 255)
        self.assertEqual(ZfsAutobackup("test test_target1 --no-progress --keep-source -1".split(" ")).run(), 255)

        with OutputIO() as buf:
            with redirect_stdout(buf):
                self.assertEqual(ZfsAutobackup("test test_target1 --no-progress --resume --verbose --no-snapshot".split(" ")).run(), 0)

            print(buf.getvalue())
            self.assertIn("The --resume", buf.getvalue())

        with OutputIO() as buf:
            with redirect_stderr(buf):
                self.assertEqual(ZfsAutobackup("test test_target_nonexisting --no-progress".split(" ")).run(), 255)

            print(buf.getvalue())
            # correct message?
            self.assertIn("Please create this dataset", buf.getvalue())

    def test_snapshotmode(self):
        """test snapshot tool mode"""

        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test --verbose".split(" ")).run())
            self.assertFalse(ZfsAutobackup("test --no-progress --verbose".split(" ")).run())

        r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
        self.assertMultiLineEqual(r,"""
@@ -35,15 +52,13 @@ test_source2/fs3/sub
test_target1
""")

    def test_defaults(self):

        with self.subTest("no datasets selected"):
            with OutputIO() as buf:
                with redirect_stderr(buf):
                    with patch('time.strftime', return_value="20101111000000"):
                        self.assertTrue(ZfsAutobackup("nonexisting test_target1 --verbose --debug".split(" ")).run())
                        self.assertTrue(ZfsAutobackup("nonexisting test_target1 --verbose --debug --no-progress".split(" ")).run())

                print(buf.getvalue())
                #correct message?
@@ -53,7 +68,7 @@ test_target1
        with self.subTest("defaults with full verbose and debug"):

            with patch('time.strftime', return_value="20101111000000"):
                self.assertFalse(ZfsAutobackup("test test_target1 --verbose --debug".split(" ")).run())
                self.assertFalse(ZfsAutobackup("test test_target1 --verbose --debug --no-progress".split(" ")).run())

            r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
            self.assertMultiLineEqual(r,"""
@@ -82,7 +97,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000

        with self.subTest("bare defaults, allow empty"):
            with patch('time.strftime', return_value="20101111000001"):
                self.assertFalse(ZfsAutobackup("test test_target1 --allow-empty".split(" ")).run())
                self.assertFalse(ZfsAutobackup("test test_target1 --allow-empty --no-progress".split(" ")).run())

            r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
@@ -153,14 +168,14 @@ test_target1/test_source2/fs2/sub@test-20101111000001 userrefs 1 -
        #make sure time handling is correct. try to make snapshots a year apart and verify that only snapshots mostly 1y old are kept
        with self.subTest("test time checking"):
            with patch('time.strftime', return_value="20111111000000"):
                self.assertFalse(ZfsAutobackup("test test_target1 --allow-empty --verbose".split(" ")).run())
                self.assertFalse(ZfsAutobackup("test test_target1 --allow-empty --verbose --no-progress".split(" ")).run())

            time_str="20111112000000" #month in the "future"
            future_timestamp=time_secs=time.mktime(time.strptime(time_str,"%Y%m%d%H%M%S"))
            with patch('time.time', return_value=future_timestamp):
                with patch('time.strftime', return_value="20111111000001"):
                    self.assertFalse(ZfsAutobackup("test test_target1 --allow-empty --verbose --keep-source 1y1y --keep-target 1d1y".split(" ")).run())
                    self.assertFalse(ZfsAutobackup("test test_target1 --allow-empty --verbose --keep-source 1y1y --keep-target 1d1y --no-progress".split(" ")).run())

            r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
@@ -194,14 +209,13 @@ test_target1/test_source2/fs2/sub@test-20111111000000
test_target1/test_source2/fs2/sub@test-20111111000001
""")

    def test_ignore_othersnaphots(self):

        r=shelltest("zfs snapshot test_source1/fs1@othersimple")
        r=shelltest("zfs snapshot test_source1/fs1@otherdate-20001111000000")

        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose".split(" ")).run())
            self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())

        r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
        self.assertMultiLineEqual(r,"""
@@ -236,7 +250,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000
        r=shelltest("zfs snapshot test_source1/fs1@otherdate-20001111000000")

        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --other-snapshots".split(" ")).run())
            self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --other-snapshots".split(" ")).run())

        r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
        self.assertMultiLineEqual(r,"""
@@ -271,7 +285,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000
    def test_nosnapshot(self):

        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-snapshot".split(" ")).run())
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-snapshot --no-progress".split(" ")).run())

        r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
        #(only parents are created )
@@ -295,7 +309,7 @@ test_target1/test_source2/fs2
    def test_nosend(self):

        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-send".split(" ")).run())
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-send --no-progress".split(" ")).run())

        r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
        #(only parents are created )
@@ -320,7 +334,7 @@ test_target1
        r=shelltest("zfs snapshot test_source1/fs1@otherreplication")

        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --ignore-replicated".split(" ")).run())
            self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --ignore-replicated".split(" ")).run())

        r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
        #(only parents are created )
@@ -351,7 +365,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000
    def test_noholds(self):

        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-holds".split(" ")).run())
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-holds --no-progress".split(" ")).run())

        r=shelltest("zfs get -r userrefs test_source1 test_source2 test_target1")
        self.assertMultiLineEqual(r,"""
@@ -383,7 +397,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000 userrefs 0 -
    def test_strippath(self):

        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --strip-path=1".split(" ")).run())
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --strip-path=1 --no-progress".split(" ")).run())

        r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
        self.assertMultiLineEqual(r,"""
@@ -419,7 +433,7 @@ test_target1/fs2/sub@test-20101111000000
        r=shelltest("zfs set refreservation=1M test_source1/fs1")

        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --clear-refreservation".split(" ")).run())
            self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --clear-refreservation".split(" ")).run())

        r=shelltest("zfs get refreservation -r test_source1 test_source2 test_target1")
        self.assertMultiLineEqual(r,"""
@@ -457,7 +471,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000 refreservation -

        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --clear-mountpoint --debug".split(" ")).run())
            self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --clear-mountpoint --debug".split(" ")).run())

        r=shelltest("zfs get canmount -r test_source1 test_source2 test_target1")
        self.assertMultiLineEqual(r,"""
@@ -490,7 +504,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000 canmount - -

        #initial backup
        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose".split(" ")).run())
            self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())

        #make change
        r=shelltest("zfs mount test_target1/test_source1/fs1")
@@ -498,18 +512,18 @@ test_target1/test_source2/fs2/sub@test-20101111000000 canmount - -

        with patch('time.strftime', return_value="20101111000001"):
            #should fail (busy)
            self.assertTrue(ZfsAutobackup("test test_target1 --verbose --allow-empty".split(" ")).run())
            self.assertTrue(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty".split(" ")).run())

        with patch('time.strftime', return_value="20101111000002"):
            #rollback, should succeed
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty --rollback".split(" ")).run())
            self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --rollback".split(" ")).run())

    def test_destroyincompat(self):

        #initial backup
        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose".split(" ")).run())
            self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())

        #add multiple compatible snapshot (written is still 0)
        r=shelltest("zfs snapshot test_target1/test_source1/fs1@compatible1")
@@ -517,7 +531,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000 canmount - -

        with patch('time.strftime', return_value="20101111000001"):
            #should be ok, is compatible
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty".split(" ")).run())
            self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty".split(" ")).run())

        #add incompatible snapshot by changing and snapshotting
        r=shelltest("zfs mount test_target1/test_source1/fs1")
@@ -527,20 +541,44 @@ test_target1/test_source2/fs2/sub@test-20101111000000 canmount - -

        with patch('time.strftime', return_value="20101111000002"):
            #--test should fail, now incompatible
            self.assertTrue(ZfsAutobackup("test test_target1 --verbose --allow-empty --test".split(" ")).run())
            self.assertTrue(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --test".split(" ")).run())

        with patch('time.strftime', return_value="20101111000002"):
            #should fail, now incompatible
            self.assertTrue(ZfsAutobackup("test test_target1 --verbose --allow-empty".split(" ")).run())
            self.assertTrue(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty".split(" ")).run())

        with patch('time.strftime', return_value="20101111000003"):
            #--test should succeed by destroying incompatibles
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty --destroy-incompatible --test".split(" ")).run())
            self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --destroy-incompatible --test".split(" ")).run())

        with patch('time.strftime', return_value="20101111000003"):
            #should succeed by destroying incompatibles
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty --destroy-incompatible".split(" ")).run())
            self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --destroy-incompatible".split(" ")).run())

        r = shelltest("zfs list -H -o name -r -t all test_target1")
        self.assertMultiLineEqual(r, """
test_target1
test_target1/test_source1
test_target1/test_source1/fs1
test_target1/test_source1/fs1@test-20101111000000
test_target1/test_source1/fs1@compatible1
test_target1/test_source1/fs1@compatible2
test_target1/test_source1/fs1@test-20101111000001
test_target1/test_source1/fs1@test-20101111000002
test_target1/test_source1/fs1@test-20101111000003
test_target1/test_source1/fs1/sub
test_target1/test_source1/fs1/sub@test-20101111000000
test_target1/test_source1/fs1/sub@test-20101111000001
test_target1/test_source1/fs1/sub@test-20101111000002
test_target1/test_source1/fs1/sub@test-20101111000003
test_target1/test_source2
test_target1/test_source2/fs2
test_target1/test_source2/fs2/sub
test_target1/test_source2/fs2/sub@test-20101111000000
test_target1/test_source2/fs2/sub@test-20101111000001
test_target1/test_source2/fs2/sub@test-20101111000002
test_target1/test_source2/fs2/sub@test-20101111000003
""")

@@ -552,13 +590,13 @@ test_target1/test_source2/fs2/sub@test-20101111000000 canmount - -

        #test all ssh directions

        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty --ssh-source localhost".split(" ")).run())
            self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --ssh-source localhost --exclude-received".split(" ")).run())

        with patch('time.strftime', return_value="20101111000001"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty --ssh-target localhost".split(" ")).run())
            self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --ssh-target localhost --exclude-received".split(" ")).run())

        with patch('time.strftime', return_value="20101111000002"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty --ssh-source localhost --ssh-target localhost".split(" ")).run())
            self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --ssh-source localhost --ssh-target localhost".split(" ")).run())

        r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
@@ -603,7 +641,7 @@ test_target1/test_source2/fs2/sub@test-20101111000002

        #initial
        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --min-change 100000".split(" ")).run())
            self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --min-change 100000".split(" ")).run())

        #make small change, use umount to reflect the changes immediately
        r=shelltest("zfs set compress=off test_source1")
@@ -613,7 +651,7 @@ test_target1/test_source2/fs2/sub@test-20101111000002

        #too small change, takes no snapshots
        with patch('time.strftime', return_value="20101111000001"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --min-change 100000".split(" ")).run())
            self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --min-change 100000".split(" ")).run())

        #make big change
        r=shelltest("dd if=/dev/zero of=/test_source1/fs1/change.txt bs=200000 count=1")
@@ -621,7 +659,7 @@ test_target1/test_source2/fs2/sub@test-20101111000002

        #bigger change, should take snapshot
        with patch('time.strftime', return_value="20101111000002"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --min-change 100000".split(" ")).run())
            self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --min-change 100000".split(" ")).run())

        r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
        self.assertMultiLineEqual(r,"""
@@ -654,7 +692,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000

        #initial
        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --test".split(" ")).run())
            self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --test".split(" ")).run())

        r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
        self.assertMultiLineEqual(r,"""
@@ -671,12 +709,12 @@ test_target1

        #actual make initial backup
        with patch('time.strftime', return_value="20101111000001"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose".split(" ")).run())
            self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())

        #test incremental
        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --test".split(" ")).run())
        with patch('time.strftime', return_value="20101111000002"):
            self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --allow-empty --verbose --test".split(" ")).run())

        r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
        self.assertMultiLineEqual(r,"""
@@ -712,7 +750,7 @@ test_target1/test_source2/fs2/sub@test-20101111000001
        shelltest("zfs send test_source1/fs1@migrate1| zfs recv test_target1/test_source1/fs1")

        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose".split(" ")).run())
            self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())

        r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
        self.assertMultiLineEqual(r,"""
@@ -745,15 +783,15 @@ test_target1/test_source2/fs2/sub@test-20101111000000
        """test if keep-source=0 and keep-target=0 dont delete common snapshot and break backup"""

        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --keep-source=0 --keep-target=0".split(" ")).run())
            self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --keep-source=0 --keep-target=0".split(" ")).run())

        #make snapshot, shouldnt delete 0
        with patch('time.strftime', return_value="20101111000001"):
            self.assertFalse(ZfsAutobackup("test --verbose --keep-source=0 --keep-target=0 --allow-empty".split(" ")).run())
            self.assertFalse(ZfsAutobackup("test --no-progress --verbose --keep-source=0 --keep-target=0 --allow-empty".split(" ")).run())

        #make snapshot 2, shouldnt delete 0 since it has holds, but will delete 1 since it has no holds
        with patch('time.strftime', return_value="20101111000002"):
            self.assertFalse(ZfsAutobackup("test --verbose --keep-source=0 --keep-target=0 --allow-empty".split(" ")).run())
            self.assertFalse(ZfsAutobackup("test --no-progress --verbose --keep-source=0 --keep-target=0 --allow-empty".split(" ")).run())

        r = shelltest("zfs list -H -o name -r -t all " + TEST_POOLS)
        self.assertMultiLineEqual(r, """
@@ -785,7 +823,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000

        #make another backup but with no-holds. we should naturally endup with only number 3
        with patch('time.strftime', return_value="20101111000003"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --keep-source=0 --keep-target=0 --no-holds --allow-empty".split(" ")).run())
            self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --keep-source=0 --keep-target=0 --no-holds --allow-empty".split(" ")).run())

        r = shelltest("zfs list -H -o name -r -t all " + TEST_POOLS)
        self.assertMultiLineEqual(r, """
@@ -815,7 +853,7 @@ test_target1/test_source2/fs2/sub@test-20101111000003

        # make snapshot 4, since we used no-holds, it will delete 3 on the source, breaking the backup
        with patch('time.strftime', return_value="20101111000004"):
            self.assertFalse(ZfsAutobackup("test --verbose --keep-source=0 --keep-target=0 --allow-empty".split(" ")).run())
            self.assertFalse(ZfsAutobackup("test --no-progress --verbose --keep-source=0 --keep-target=0 --allow-empty".split(" ")).run())

        r = shelltest("zfs list -H -o name -r -t all " + TEST_POOLS)
        self.assertMultiLineEqual(r, """
@@ -842,9 +880,26 @@ test_target1/test_source2/fs2/sub
test_target1/test_source2/fs2/sub@test-20101111000003
""")

    ###########################
    # TODO:

    def test_raw(self):
    def test_progress(self):

        self.skipTest("todo: later when travis supports zfs 0.8")
        r=shelltest("dd if=/dev/zero of=/test_source1/data.txt bs=200000 count=1")
        r = shelltest("zfs snapshot test_source1@test")

        l=LogConsole(show_verbose=True, show_debug=False, color=False)
        n=ZfsNode("test",l)
        d=ZfsDataset(n,"test_source1@test")

        sp=d.send_pipe([], prev_snapshot=None, resume_token=None, show_progress=True, raw=False, send_pipes=[], send_properties=True, write_embedded=True)

        with OutputIO() as buf:
            with redirect_stderr(buf):
                try:
                    n.run(["sleep", "2"], inp=sp)
                except:
                    pass

            print(buf.getvalue())
            # correct message?
            self.assertRegex(buf.getvalue(),".*>>> .*minutes left.*")
74  tests/test_zfsautobackup31.py  Normal file
@@ -0,0 +1,74 @@

from basetest import *
import time

class TestZfsAutobackup31(unittest2.TestCase):
    """various new 3.1 features"""

    def setUp(self):
        prepare_zpools()
        self.longMessage=True

    def test_no_thinning(self):

        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty".split(" ")).run())

        with patch('time.strftime', return_value="20101111000001"):
            self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --keep-target=0 --keep-source=0 --no-thinning".split(" ")).run())

        r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
        self.assertMultiLineEqual(r,"""
test_source1
test_source1/fs1
test_source1/fs1@test-20101111000000
test_source1/fs1@test-20101111000001
test_source1/fs1/sub
test_source1/fs1/sub@test-20101111000000
test_source1/fs1/sub@test-20101111000001
test_source2
test_source2/fs2
test_source2/fs2/sub
test_source2/fs2/sub@test-20101111000000
test_source2/fs2/sub@test-20101111000001
test_source2/fs3
test_source2/fs3/sub
test_target1
test_target1/test_source1
test_target1/test_source1/fs1
test_target1/test_source1/fs1@test-20101111000000
test_target1/test_source1/fs1@test-20101111000001
test_target1/test_source1/fs1/sub
test_target1/test_source1/fs1/sub@test-20101111000000
test_target1/test_source1/fs1/sub@test-20101111000001
test_target1/test_source2
test_target1/test_source2/fs2
test_target1/test_source2/fs2/sub
test_target1/test_source2/fs2/sub@test-20101111000000
test_target1/test_source2/fs2/sub@test-20101111000001
""")

    def test_re_replication(self):
        """test re-replication of something thats already a backup (new in v3.1-beta5)"""

        shelltest("zfs create test_target1/a")
        shelltest("zfs create test_target1/b")

        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1/a --no-progress --verbose --debug".split(" ")).run())

        with patch('time.strftime', return_value="20101111000001"):
            self.assertFalse(ZfsAutobackup("test test_target1/b --no-progress --verbose".split(" ")).run())

        r=shelltest("zfs list -H -o name -r -t snapshot test_target1")
        #NOTE: it wont backup test_target1/a/test_source2/fs2/sub to test_target1/b since it doesnt have the zfs_autobackup property anymore.
        self.assertMultiLineEqual(r,"""
test_target1/a/test_source1/fs1@test-20101111000000
test_target1/a/test_source1/fs1/sub@test-20101111000000
test_target1/a/test_source2/fs2/sub@test-20101111000000
test_target1/b/test_source1/fs1@test-20101111000000
test_target1/b/test_source1/fs1/sub@test-20101111000000
test_target1/b/test_source2/fs2/sub@test-20101111000000
test_target1/b/test_target1/a/test_source1/fs1@test-20101111000000
test_target1/b/test_target1/a/test_source1/fs1/sub@test-20101111000000
""")
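What makes the re-replication above work is the selection property mentioned in the NOTE. A hedged sketch of the mechanism these tests rely on (the exact fixture commands belong to prepare_zpools() and are an assumption here, but the property semantics follow the zfs-autobackup manual):

```python
# datasets are selected by the autobackup:<backup-name> property, so for the
# backup name "test" used throughout these tests (hypothetical fixture setup):
shelltest("zfs set autobackup:test=true test_source1/fs1")   # fs1 and its children
shelltest("zfs set autobackup:test=child test_source2/fs2")  # only the children of fs2
# a locally-set property travels along when properties are sent, so the copy
# under test_target1/a is itself selected again by the second run; an inherited
# property does not travel, which is why test_target1/a/test_source2/fs2/sub
# drops out, exactly as the NOTE above describes.
```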
@@ -16,7 +16,7 @@ class TestZfsNode(unittest2.TestCase):
        node=ZfsNode("test", logger, description=description)

        with self.subTest("first snapshot"):
            node.consistent_snapshot(node.selected_datasets, "test-1",100000)
            node.consistent_snapshot(node.selected_datasets(exclude_paths=[], exclude_received=False), "test-1",100000)
            r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
            self.assertEqual(r,"""
test_source1
@@ -35,7 +35,7 @@ test_target1

        with self.subTest("second snapshot, no changes, no snapshot"):
            node.consistent_snapshot(node.selected_datasets, "test-2",1)
            node.consistent_snapshot(node.selected_datasets(exclude_paths=[], exclude_received=False), "test-2",1)
            r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
            self.assertEqual(r,"""
test_source1
@@ -53,7 +53,7 @@ test_target1
""")

        with self.subTest("second snapshot, no changes, empty snapshot"):
            node.consistent_snapshot(node.selected_datasets, "test-2",0)
            node.consistent_snapshot(node.selected_datasets(exclude_paths=[], exclude_received=False), "test-2",0)
            r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
            self.assertEqual(r,"""
test_source1
@@ -78,7 +78,7 @@ test_target1
        logger=LogStub()
        description="[Source]"
        node=ZfsNode("test", logger, description=description)
        s=pformat(node.selected_datasets)
        s=pformat(node.selected_datasets(exclude_paths=[], exclude_received=False))
        print(s)

        #basics
@@ -115,7 +115,7 @@ test_target1
    def test_supportedrecvoptions(self):
        logger=LogStub()
        description="[Source]"
        #NOTE: this couldnt hang via ssh if we dont close filehandles properly. (which was a previous bug)
        #NOTE: this could hang via ssh if we dont close filehandles properly. (which was a previous bug)
        node=ZfsNode("test", logger, description=description, ssh_to='localhost')
        self.assertIsInstance(node.supported_recv_options, list)
160  zfs_autobackup/CmdPipe.py  Normal file
@@ -0,0 +1,160 @@

import subprocess
import os
import select

try:
    from shlex import quote as cmd_quote
except ImportError:
    from pipes import quote as cmd_quote


class CmdItem:
    """one command item, to be added to a CmdPipe"""

    def __init__(self, cmd, readonly=False, stderr_handler=None, exit_handler=None, shell=False):
        """create item. caller has to make sure cmd is properly escaped when using shell.
        :type cmd: list of str
        """

        self.cmd = cmd
        self.readonly = readonly
        self.stderr_handler = stderr_handler
        self.exit_handler = exit_handler
        self.shell = shell
        self.process = None

    def __str__(self):
        """return copy-pastable version of command."""
        if self.shell:
            # its already copy-pastable for a shell:
            return " ".join(self.cmd)
        else:
            # make it copy-pastable, will make a mess of quotes sometimes, but is correct
            return " ".join(map(cmd_quote, self.cmd))

    def create(self, stdin):
        """actually create the subprocess (called by CmdPipe)"""

        # make sure the command gets all the data in utf8 format:
        # (this is necessary if LC_ALL=en_US.utf8 is not set in the environment)
        encoded_cmd = []
        for arg in self.cmd:
            encoded_cmd.append(arg.encode('utf-8'))

        self.process = subprocess.Popen(encoded_cmd, env=os.environ, stdout=subprocess.PIPE, stdin=stdin,
                                        stderr=subprocess.PIPE, shell=self.shell)


class CmdPipe:
    """a pipe of one or more commands. also takes care of utf-8 encoding/decoding and line based parsing"""

    def __init__(self, readonly=False, inp=None):
        """
        :param inp: input string for stdin
        :param readonly: Only execute if the entire pipe consists of readonly commands
        """
        # list of commands + error handlers to execute
        self.items = []

        self.inp = inp
        self.readonly = readonly
        self._should_execute = True

    def add(self, cmd_item):
        """adds a CmdItem to the pipe.
        :type cmd_item: CmdItem
        """

        self.items.append(cmd_item)

        if not cmd_item.readonly and self.readonly:
            self._should_execute = False

    def __str__(self):
        """transform the whole pipe into a oneliner for debugging and testing. this should generate a copy-pastable string for use in a console"""

        ret = ""
        for item in self.items:
            if ret:
                ret = ret + " | "
            ret = ret + "({})".format(item)  # this will do proper escaping to make it copy-pastable

        return ret

    def should_execute(self):
        return self._should_execute

    def execute(self, stdout_handler):
        """run the pipe. returns True if all exit handlers returned True"""

        if not self._should_execute:
            return True

        # first process should have actual user input as stdin:
        selectors = []

        # create processes
        last_stdout = None
        stdin = subprocess.PIPE
        for item in self.items:

            item.create(stdin)
            selectors.append(item.process.stderr)

            if last_stdout is None:
                # we're the first process in the pipe, do we have some input?
                if self.inp is not None:
                    # TODO: make streaming to support big inputs?
                    item.process.stdin.write(self.inp.encode('utf-8'))
                item.process.stdin.close()
            else:
                # last stdout was piped to this stdin already, so close it because we dont need it anymore
                last_stdout.close()

            last_stdout = item.process.stdout
            stdin = last_stdout

        # monitor last stdout as well
        selectors.append(last_stdout)

        while True:
            # wait for output on one of the stderrs or last_stdout
            (read_ready, write_ready, ex_ready) = select.select(selectors, [], [])
            eof_count = 0
            done_count = 0

            # read line and call appropriate handlers
            if last_stdout in read_ready:
                line = last_stdout.readline().decode('utf-8').rstrip()
                if line != "":
                    stdout_handler(line)
                else:
                    eof_count = eof_count + 1

            for item in self.items:
                if item.process.stderr in read_ready:
                    line = item.process.stderr.readline().decode('utf-8').rstrip()
                    if line != "":
                        item.stderr_handler(line)
                    else:
                        eof_count = eof_count + 1

                if item.process.poll() is not None:
                    done_count = done_count + 1

            # all filehandles are eof and all processes are done (poll() is not None)
            if eof_count == len(selectors) and done_count == len(self.items):
                break

        # close filehandles
        last_stdout.close()
        for item in self.items:
            item.process.stderr.close()

        # call exit handlers
        success = True
        for item in self.items:
            if item.exit_handler is not None:
                success = item.exit_handler(item.process.returncode) and success

        return success
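A minimal usage sketch of the new class, built only from the API shown above (the commands themselves are arbitrary examples):

```python
from zfs_autobackup.CmdPipe import CmdPipe, CmdItem

# build the equivalent of "echo hi | tr a-z A-Z" with python-managed plumbing
pipe = CmdPipe()
output = []
pipe.add(CmdItem(["echo", "hi"], stderr_handler=print, exit_handler=lambda rc: rc == 0))
pipe.add(CmdItem(["tr", "a-z", "A-Z"], stderr_handler=print, exit_handler=lambda rc: rc == 0))

ok = pipe.execute(stdout_handler=output.append)  # True if all exit handlers returned True
print(ok, output)  # True ['HI']
```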
@@ -1,13 +1,24 @@
import os
import select
import subprocess

from zfs_autobackup.CmdPipe import CmdPipe, CmdItem
from zfs_autobackup.LogStub import LogStub

try:
    from shlex import quote as cmd_quote
except ImportError:
    from pipes import quote as cmd_quote


class ExecuteError(Exception):
    pass


class ExecuteNode(LogStub):
    """an endpoint to execute local or remote commands via ssh"""

    PIPE=1

    def __init__(self, ssh_config=None, ssh_to=None, readonly=False, debug_output=False):
        """ssh_config: custom ssh config
        ssh_to: server you want to ssh to. none means local
@@ -38,167 +49,111 @@ class ExecuteNode(LogStub):
        else:
            self.error("STDERR > " + line.rstrip())

    def _parse_stderr_pipe(self, line, hide_errors):
        """parse stderr from pipe input process. can be overridden in subclass"""
        if hide_errors:
            self.debug("STDERR|> " + line.rstrip())
    def _quote(self, cmd):
        """return quoted version of command. if it has value PIPE it will add an actual | """
        if cmd==self.PIPE:
            return('|')
        else:
            self.error("STDERR|> " + line.rstrip())
            return(cmd_quote(cmd))

    def run(self, cmd, inp=None, tab_split=False, valid_exitcodes=None, readonly=False, hide_errors=False, pipe=False,
            return_stderr=False):
        """run a command on the node.
    def _shell_cmd(self, cmd):
        """prefix specified ssh shell to command and escape shell characters"""

        ret=[]

        #add remote shell
        if not self.is_local():
            ret=["ssh"]

            if self.ssh_config is not None:
                ret.extend(["-F", self.ssh_config])

            ret.append(self.ssh_to)

        ret.append(" ".join(map(self._quote, cmd)))

        return ret

    def is_local(self):
        return self.ssh_to is None

    def run(self, cmd, inp=None, tab_split=False, valid_exitcodes=None, readonly=False, hide_errors=False,
            return_stderr=False, pipe=False):
        """run a command on the node, checks output, parses/handles output and returns it

        Either uses a local shell (sh -c) or remote shell (ssh) to execute the command. Therefore the command can have stuff like actual pipes in it, if you dont want to use pipe=True to pipe stuff.

        :param cmd: the actual command, should be a list, where the first item is the command
            and the rest are parameters.
        :param inp: Can be None, a string or a pipe-handle you got from another run()
            and the rest are parameters. use ExecuteNode.PIPE to add an unescaped |
            (if you want to use system piping instead of python piping)
        :param pipe: return CmdPipe instead of executing it.
        :param inp: Can be None, a string or a CmdPipe that was previously returned.
        :param tab_split: split tabbed files in output into a list
        :param valid_exitcodes: list of valid exit codes for this command (checks exit code of both sides of a pipe)
            Use [] to accept all exit codes.
            Use [] to accept all exit codes. Default [0]
        :param readonly: make this True if the command doesn't make any changes and is safe to execute in testmode
        :param hide_errors: don't show stderr output as error, instead show it as debugging output (use to hide expected errors)
        :param pipe: Instead of executing, return a pipe-handle to be used to
            input to another run() command. (just like a | in linux)
        :param return_stderr: return both stdout and stderr as a tuple. (normally only returns stdout)

        """

        if valid_exitcodes is None:
            valid_exitcodes = [0]

        encoded_cmd = []

        # use ssh?
        if self.ssh_to is not None:
            encoded_cmd.append("ssh".encode('utf-8'))

            if self.ssh_config is not None:
                encoded_cmd.extend(["-F".encode('utf-8'), self.ssh_config.encode('utf-8')])

            encoded_cmd.append(self.ssh_to.encode('utf-8'))

            # make sure the command gets all the data in utf8 format:
            # (this is necessary if LC_ALL=en_US.utf8 is not set in the environment)
            for arg in cmd:
                # add single quotes for remote commands to support spaces and other weird stuff (remote commands are
                # executed in a shell) and escape existing single quotes (bash needs ' to end the quoted string,
                # then a \' for the actual quote and then another ' to start a new quoted string) (and then python
                # needs the double \ to get a single \)
                encoded_cmd.append(("'" + arg.replace("'", "'\\''") + "'").encode('utf-8'))

        # create new pipe?
        if not isinstance(inp, CmdPipe):
            cmd_pipe = CmdPipe(self.readonly, inp)
        else:
            for arg in cmd:
                encoded_cmd.append(arg.encode('utf-8'))
            # add stuff to existing pipe
            cmd_pipe = inp

        # debug and test stuff
        debug_txt = ""
        for c in encoded_cmd:
            debug_txt = debug_txt + " " + c.decode()

        if pipe:
            debug_txt = debug_txt + " |"

        if self.readonly and not readonly:
            self.debug("SKIP > " + debug_txt)
        else:
            if pipe:
                self.debug("PIPE > " + debug_txt)
            else:
                self.debug("RUN > " + debug_txt)

        # determine stdin
        if inp is None:
            # NOTE: Not None, otherwise it reads stdin from terminal!
            stdin = subprocess.PIPE
        elif isinstance(inp, str) or type(inp) == 'unicode':
            self.debug("INPUT > \n" + inp.rstrip())
            stdin = subprocess.PIPE
        elif isinstance(inp, subprocess.Popen):
            self.debug("Piping input")
            stdin = inp.stdout
        else:
            raise (Exception("Program error: Incompatible input"))

        if self.readonly and not readonly:
            # todo: what happens if input is piped?
            return

        # execute and parse/return results
        p = subprocess.Popen(encoded_cmd, env=os.environ, stdout=subprocess.PIPE, stdin=stdin, stderr=subprocess.PIPE)

        # Note: make streaming?
        if isinstance(inp, str) or type(inp) == 'unicode':
            p.stdin.write(inp.encode('utf-8'))

        if p.stdin:
            p.stdin.close()

        # return pipe
        if pipe:
            return p

        # handle all outputs
        if isinstance(inp, subprocess.Popen):
            selectors = [p.stdout, p.stderr, inp.stderr]
            inp.stdout.close()  # otherwise inputprocess wont exit when ours does
        else:
            selectors = [p.stdout, p.stderr]

        output_lines = []
        # stderr parser
        error_lines = []
        while True:
            (read_ready, write_ready, ex_ready) = select.select(selectors, [], [])
            eof_count = 0
            if p.stdout in read_ready:
                line = p.stdout.readline().decode('utf-8')
                if line != "":
                    if tab_split:
                        output_lines.append(line.rstrip().split('\t'))
                    else:
                        output_lines.append(line.rstrip())
                    self._parse_stdout(line)
                else:
                    eof_count = eof_count + 1
            if p.stderr in read_ready:
                line = p.stderr.readline().decode('utf-8')
                if line != "":

        def stderr_handler(line):
            if tab_split:
                error_lines.append(line.rstrip().split('\t'))
            else:
                error_lines.append(line.rstrip())
            self._parse_stderr(line, hide_errors)
                else:
                    eof_count = eof_count + 1
            if isinstance(inp, subprocess.Popen) and (inp.stderr in read_ready):
                line = inp.stderr.readline().decode('utf-8')
                if line != "":
                    self._parse_stderr_pipe(line, hide_errors)
                else:
                    eof_count = eof_count + 1

            # stop if both processes are done and all filehandles are EOF:
            if (p.poll() is not None) and (
                    (not isinstance(inp, subprocess.Popen)) or inp.poll() is not None) and eof_count == len(selectors):
                break

        p.stderr.close()
        p.stdout.close()
        # exit code handler
        if valid_exitcodes is None:
            valid_exitcodes = [0]

        def exit_handler(exit_code):
            if self.debug_output:
                self.debug("EXIT > {}".format(p.returncode))
                self.debug("EXIT > {}".format(exit_code))

            # handle piped process error output and exit codes
            if isinstance(inp, subprocess.Popen):
                inp.stderr.close()
                inp.stdout.close()
            if (valid_exitcodes != []) and (exit_code not in valid_exitcodes):
                self.error("Command \"{}\" returned exit code {} (valid codes: {})".format(cmd_item, exit_code, valid_exitcodes))
                return False

                if self.debug_output:
                    self.debug("EXIT |> {}".format(inp.returncode))
                if valid_exitcodes and inp.returncode not in valid_exitcodes:
                    raise (subprocess.CalledProcessError(inp.returncode, "(pipe)"))
            return True

        if valid_exitcodes and p.returncode not in valid_exitcodes:
            raise (subprocess.CalledProcessError(p.returncode, encoded_cmd))
        # add shell command and handlers to pipe
        cmd_item=CmdItem(cmd=self._shell_cmd(cmd), readonly=readonly, stderr_handler=stderr_handler, exit_handler=exit_handler, shell=self.is_local())
        cmd_pipe.add(cmd_item)

        # return pipe instead of executing?
        if pipe:
            return cmd_pipe

        # stdout parser
        output_lines = []

        def stdout_handler(line):
            if tab_split:
                output_lines.append(line.rstrip().split('\t'))
            else:
                output_lines.append(line.rstrip())
            self._parse_stdout(line)

        if cmd_pipe.should_execute():
            self.debug("CMD > {}".format(cmd_pipe))
        else:
            self.debug("CMDSKIP> {}".format(cmd_pipe))

        # execute and call handlers in CmdPipe
        if not cmd_pipe.execute(stdout_handler=stdout_handler):
            raise(ExecuteError("Last command returned error"))

        if return_stderr:
            return output_lines, error_lines
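Put together, run() now builds a CmdPipe per call and chains calls through inp=. A minimal sketch using only the parameters documented above (dataset names are hypothetical):

```python
from zfs_autobackup.ExecuteNode import ExecuteNode

local = ExecuteNode()                     # local node: commands run via a local shell
remote = ExecuteNode(ssh_to="localhost")  # remote node: same cmd list, wrapped in ssh

# plain call, returns stdout as a list of lines:
datasets = local.run(["zfs", "list", "-H", "-o", "name"], readonly=True)

# python piping, as used for replication: pipe=True returns the CmdPipe;
# feeding it back in as inp= appends the receive side and executes the whole pipe
send = remote.run(["zfs", "send", "pool/fs@snap"], pipe=True)
local.run(["zfs", "recv", "pool/backup"], inp=send)
```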
@@ -3,35 +3,44 @@ from __future__ import print_function

import sys


colorama = False
if sys.stdout.isatty():
    try:
        import colorama
    except ImportError:
        colorama = False
        pass


class LogConsole:
    """Log-class that outputs to console, adding colors if needed"""

    def __init__(self, show_debug=False, show_verbose=False):
    def __init__(self, show_debug, show_verbose, color):
        self.last_log = ""
        self.show_debug = show_debug
        self.show_verbose = show_verbose

    @staticmethod
    def error(txt):
        if colorama:
        if color:
            # try to use color, fall back if colorama is not available
            self.colorama=False
            try:
                import colorama
                global colorama
                self.colorama = True
            except ImportError:
                pass

        else:
            self.colorama=False

    def error(self, txt):
        if self.colorama:
            print(colorama.Fore.RED + colorama.Style.BRIGHT + "! " + txt + colorama.Style.RESET_ALL, file=sys.stderr)
        else:
            print("! " + txt, file=sys.stderr)
        sys.stderr.flush()

    def warning(self, txt):
        if self.colorama:
            print(colorama.Fore.YELLOW + colorama.Style.BRIGHT + "  NOTE: " + txt + colorama.Style.RESET_ALL)
        else:
            print("  NOTE: " + txt)
        sys.stdout.flush()

    def verbose(self, txt):
        if self.show_verbose:
            if colorama:
            if self.colorama:
                print(colorama.Style.NORMAL + "  " + txt + colorama.Style.RESET_ALL)
            else:
                print("  " + txt)
@@ -39,8 +48,19 @@ class LogConsole:

    def debug(self, txt):
        if self.show_debug:
            if colorama:
            if self.colorama:
                print(colorama.Fore.GREEN + "# " + txt + colorama.Style.RESET_ALL)
            else:
                print("# " + txt)
            sys.stdout.flush()

    def progress(self, txt):
        """print progress output to stderr (stays on same line)"""
        self.clear_progress()
        print(">>> {}\r".format(txt), end='', file=sys.stderr)
        sys.stderr.flush()

    def clear_progress(self):
        import colorama
        print(colorama.ansi.clear_line(), end='', file=sys.stderr)
        sys.stderr.flush()
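The net effect of the LogConsole changes, as a short sketch built from the methods shown above:

```python
from zfs_autobackup.LogConsole import LogConsole

# color is now an explicit constructor choice instead of a module-level guess;
# color=True silently falls back to plain text if colorama is not installed
log = LogConsole(show_debug=True, show_verbose=True, color=False)
log.verbose("starting")  # "  starting"        -> stdout
log.debug("details")     # "# details"         -> stdout
log.warning("heads up")  # "  NOTE: heads up"  -> stdout (new in this release)
log.error("broken")      # "! broken"          -> stderr
```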
@@ -11,5 +11,8 @@ class LogStub:
    def verbose(self, txt):
        print("VERBOSE: " + txt)

    def warning(self, txt):
        print("WARNING: " + txt)

    def error(self, txt):
        print("ERROR : " + txt)
@@ -7,8 +7,9 @@ class Thinner:
    """progressive thinner (universal, used for cleaning up snapshots)"""

    def __init__(self, schedule_str=""):
        """schedule_str: comma separated list of ThinnerRules. A plain number specifies how many snapshots to always
        keep.
        """
        """
        Args:
            schedule_str: comma separated list of ThinnerRules. A plain number specifies how many snapshots to always keep.
        """

        self.rules = []
@@ -19,7 +20,7 @@ class Thinner:

        rule_strs = schedule_str.split(",")
        for rule_str in rule_strs:
            if rule_str.isdigit():
            if rule_str.lstrip('-').isdigit():
                self.always_keep = int(rule_str)
                if self.always_keep < 0:
                    raise (Exception("Number of snapshots to keep cant be negative: {}".format(self.always_keep)))
@@ -37,11 +38,15 @@ class Thinner:
        return ret

    def thin(self, objects, keep_objects=None, now=None):
        """thin list of objects with current schedule rules. objects: list of objects to thin. every object should
        have timestamp attribute. keep_objects: objects to always keep (these should also be in normal objects list,
        so we can use them to perhaps delete other obsolete objects)
        """thin list of objects with current schedule rules. objects: list of
        objects to thin. every object should have a timestamp attribute.

        return( keeps, removes )

        Args:
            objects: list of objects to check (should have a timestamp attribute)
            keep_objects: objects to always keep (if they are also in the normal objects list)
            now: if specified, use this time as the current time
        """

        if not keep_objects:
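A minimal sketch of the documented thin() interface, using throwaway objects that carry the required timestamp attribute:

```python
import time
from collections import namedtuple
from zfs_autobackup.Thinner import Thinner

Snap = namedtuple("Snap", ["name", "timestamp"])
now = time.time()
snaps = [Snap("test-%d" % i, now - i * 86400) for i in range(30)]  # one per day

# "10,1d1w,1w1m,1m1y" (the default schedule): always keep the last 10, plus one
# per day for a week, one per week for a month and one per month for a year
keeps, removes = Thinner("10,1d1w,1w1m,1m1y").thin(snaps, now=now)
print(len(keeps), len(removes))
```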
@@ -39,6 +39,9 @@ class ThinnerRule:
        rule_str = rule_str.lower()
        matches = re.findall("([0-9]*)([a-z]*)([0-9]*)([a-z]*)", rule_str)[0]

        if '' in matches:
            raise (Exception("Invalid schedule string: '{}'".format(rule_str)))

        period_amount = int(matches[0])
        period_unit = matches[1]
        ttl_amount = int(matches[2])
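Why the added check is needed: the regex always matches, so a malformed rule used to slip through as empty groups. A quick sketch of the behavior:

```python
import re

PATTERN = "([0-9]*)([a-z]*)([0-9]*)([a-z]*)"

print(re.findall(PATTERN, "1d1w")[0])  # ('1', 'd', '1', 'w'): one per day, kept a week
print(re.findall(PATTERN, "xxx")[0])   # ('', 'xxx', '', ''): contains '', so the
                                       # "Invalid schedule string" exception fires
```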
@ -2,6 +2,8 @@ import argparse
|
||||
import sys
|
||||
import time
|
||||
|
||||
from zfs_autobackup import compressors
|
||||
from zfs_autobackup.ExecuteNode import ExecuteNode
|
||||
from zfs_autobackup.Thinner import Thinner
|
||||
from zfs_autobackup.ZfsDataset import ZfsDataset
|
||||
from zfs_autobackup.LogConsole import LogConsole
|
||||
@ -9,11 +11,13 @@ from zfs_autobackup.ZfsNode import ZfsNode
|
||||
from zfs_autobackup.ThinnerRule import ThinnerRule
|
||||
|
||||
|
||||
|
||||
|
||||
class ZfsAutobackup:
    """main class"""

    VERSION = "3.0.1-beta8"
    HEADER = "zfs-autobackup v{} - Copyright 2020 E.H.Eefting (edwin@datux.nl)".format(VERSION)
    VERSION = "3.1-beta6"
    HEADER = "zfs-autobackup v{} - (c)2021 E.H.Eefting (edwin@datux.nl)".format(VERSION)

    def __init__(self, argv, print_arguments=True):

@@ -24,14 +28,14 @@ class ZfsAutobackup:
        parser = argparse.ArgumentParser(
            description=self.HEADER,
            epilog='Full manual at: https://github.com/psy0rz/zfs_autobackup')
        parser.add_argument('--ssh-config', default=None, help='Custom ssh client config')
        parser.add_argument('--ssh-source', default=None,
                            help='Source host to get backup from. (user@hostname) Default %(default)s.')
        parser.add_argument('--ssh-target', default=None,
                            help='Target host to push backup to. (user@hostname) Default %(default)s.')
        parser.add_argument('--keep-source', type=str, default="10,1d1w,1w1m,1m1y",
        parser.add_argument('--ssh-config', metavar='CONFIG-FILE', default=None, help='Custom ssh client config')
        parser.add_argument('--ssh-source', metavar='USER@HOST', default=None,
                            help='Source host to get backup from.')
        parser.add_argument('--ssh-target', metavar='USER@HOST', default=None,
                            help='Target host to push backup to.')
        parser.add_argument('--keep-source', metavar='SCHEDULE', type=str, default="10,1d1w,1w1m,1m1y",
                            help='Thinning schedule for old source snapshots. Default: %(default)s')
        parser.add_argument('--keep-target', type=str, default="10,1d1w,1w1m,1m1y",
        parser.add_argument('--keep-target', metavar='SCHEDULE', type=str, default="10,1d1w,1w1m,1m1y",
                            help='Thinning schedule for old target snapshots. Default: %(default)s')

        parser.add_argument('backup_name', metavar='backup-name',
@@ -47,8 +51,10 @@ class ZfsAutobackup:
                            help='Don\'t create new snapshots (useful for finishing uncompleted backups, or cleanups)')
        parser.add_argument('--no-send', action='store_true',
                            help='Don\'t send snapshots (useful for cleanups, or if you want a separate send-cronjob)')
        # parser.add_argument('--no-thinning', action='store_true', help='Don\'t run the thinner.')
        parser.add_argument('--min-change', type=int, default=1,
        parser.add_argument('--no-thinning', action='store_true', help="Do not destroy any snapshots.")
        parser.add_argument('--no-holds', action='store_true',
                            help='Don\'t hold snapshots. (Faster. Allows you to destroy common snapshot.)')
        parser.add_argument('--min-change', metavar='BYTES', type=int, default=1,
                            help='Number of bytes written after which we consider a dataset changed (default %('
                                 'default)s)')
        parser.add_argument('--allow-empty', action='store_true',
@@ -56,11 +62,8 @@ class ZfsAutobackup:
        parser.add_argument('--ignore-replicated', action='store_true',
                            help='Ignore datasets that seem to be replicated some other way. (No changes since '
                                 'latest snapshot. Useful for proxmox HA replication)')
        parser.add_argument('--no-holds', action='store_true',
                            help='Don\'t hold snapshots. (Faster)')

        parser.add_argument('--resume', action='store_true', help=argparse.SUPPRESS)
        parser.add_argument('--strip-path', default=0, type=int,
        parser.add_argument('--strip-path', metavar='N', default=0, type=int,
                            help='Number of directories to strip from target path (use 1 when cloning zones between 2 '
                                 'SmartOS machines)')
        # parser.add_argument('--buffer', default="", help='Use mbuffer with specified size to speedup zfs transfer.
@@ -72,10 +75,10 @@ class ZfsAutobackup:
        parser.add_argument('--clear-mountpoint', action='store_true',
                            help='Set property canmount=noauto for new datasets. (recommended, prevents mount '
                                 'conflicts. same as --set-properties canmount=noauto)')
        parser.add_argument('--filter-properties', type=str,
        parser.add_argument('--filter-properties', metavar='PROPERTY,...', type=str,
                            help='List of properties to "filter" when receiving filesystems. (you can still restore '
                                 'them with zfs inherit -S)')
        parser.add_argument('--set-properties', type=str,
        parser.add_argument('--set-properties', metavar='PROPERTY=VALUE,...', type=str,
                            help='List of properties to override when receiving filesystems. (you can still restore '
                                 'them with zfs inherit -S)')
        parser.add_argument('--rollback', action='store_true',
@@ -83,14 +86,18 @@ class ZfsAutobackup:
                                 'prevent changes by setting the readonly property on the target_path to on)')
        parser.add_argument('--destroy-incompatible', action='store_true',
                            help='Destroy incompatible snapshots on target. Use with care! (implies --rollback)')
        parser.add_argument('--destroy-missing', type=str, default=None,
        parser.add_argument('--destroy-missing', metavar="SCHEDULE", type=str, default=None,
                            help='Destroy datasets on target that are missing on the source. Specify the time since '
                                 'the last snapshot, e.g: --destroy-missing 30d')
        parser.add_argument('--ignore-transfer-errors', action='store_true',
                            help='Ignore transfer errors (still checks if received filesystem exists. useful for '
                                 'acltype errors)')
        parser.add_argument('--raw', action='store_true',
                            help='For encrypted datasets, send data exactly as it exists on disk.')

        parser.add_argument('--decrypt', action='store_true',
                            help='Decrypt data before sending it over.')

        parser.add_argument('--encrypt', action='store_true',
                            help='Encrypt data after receiving it.')

        parser.add_argument('--test', action='store_true',
                            help='dont change anything, just show what would be done (still does all read-only '
@@ -102,7 +109,24 @@ class ZfsAutobackup:
                            help='Show zfs commands and their output/exit codes. (noisy)')
        parser.add_argument('--progress', action='store_true',
                            help='show zfs progress output. Enabled automatically on ttys. (use --no-progress to disable)')
        parser.add_argument('--no-progress', action='store_true', help=argparse.SUPPRESS)  # needed to workaround a zfs recv -v bug
        parser.add_argument('--no-progress', action='store_true',
                            help=argparse.SUPPRESS)  # needed to workaround a zfs recv -v bug

        parser.add_argument('--resume', action='store_true', help=argparse.SUPPRESS)
        parser.add_argument('--raw', action='store_true', help=argparse.SUPPRESS)
        parser.add_argument('--exclude-received', action='store_true',
                            help=argparse.SUPPRESS)  # probably never needed anymore

        # these things all do stuff by piping zfs send/recv IO
        parser.add_argument('--send-pipe', metavar="COMMAND", default=[], action='append',
                            help='pipe zfs send output through COMMAND (can be used multiple times)')
        parser.add_argument('--recv-pipe', metavar="COMMAND", default=[], action='append',
                            help='pipe zfs recv input through COMMAND (can be used multiple times)')
        parser.add_argument('--compress', metavar='TYPE', default=None, choices=compressors.choices(), help='Use compression during transfer, zstd-fast recommended. ({})'.format(", ".join(compressors.choices())))
        parser.add_argument('--rate', metavar='DATARATE', default=None, help='Limit data transfer rate (e.g. 128K. requires mbuffer.)')
        parser.add_argument('--buffer', metavar='SIZE', default=None, help='Add zfs send and recv buffers to smooth out IO bursts. (e.g. 128M. requires mbuffer)')
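        # The compressors module used above is only visible in this diff through
        # its call sites (choices(), compress_cmd(), decompress_cmd()). A minimal
        # sketch of such a module -- the names of the table and the exact command
        # lines below are assumptions, not the shipped code:
        #
        #   COMPRESSORS = {
        #       # name: (compress argv, decompress argv)
        #       "zstd-fast": (["zstd", "-3"], ["zstd", "-dc"]),
        #       "gzip": (["gzip", "-3"], ["gzip", "-dc"]),
        #   }
        #
        #   def choices():
        #       return list(COMPRESSORS.keys())
        #
        #   def compress_cmd(name):
        #       return list(COMPRESSORS[name][0])
        #
        #   def decompress_cmd(name):
        #       return list(COMPRESSORS[name][1])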
        # note args is the only global variable we use, since its a global readonly setting anyway
        args = parser.parse_args(argv)
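        # The constructor takes a plain argv list, so besides the CLI entry point
        # the same flow can be driven from Python (a sketch; the backup name and
        # target path below are made up for illustration):
        #
        #   from zfs_autobackup.ZfsAutobackup import ZfsAutobackup
        #
        #   failed = ZfsAutobackup(["backup1", "backup/server1", "--verbose", "--test"]).run()
        #   # run() returns the fail count; 0 means every dataset synced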
@@ -125,18 +149,29 @@ class ZfsAutobackup:
        if args.destroy_incompatible:
            args.rollback = True

        self.log = LogConsole(show_debug=self.args.debug, show_verbose=self.args.verbose)
        self.log = LogConsole(show_debug=self.args.debug, show_verbose=self.args.verbose, color=sys.stdout.isatty())
        self.verbose(self.HEADER)

        if args.resume:
            self.verbose("NOTE: The --resume option isn't needed anymore (its autodetected now)")
            self.warning("The --resume option isn't needed anymore (its autodetected now)")

        if args.raw:
            self.warning(
                "The --raw option isn't needed anymore (its autodetected now). Also see --encrypt and --decrypt.")

        if args.target_path is not None and args.target_path[0] == "/":
            self.log.error("Target should not start with a /")
            sys.exit(255)

        if args.compress and not args.ssh_source and not args.ssh_target:
            self.warning("Using compression, but transfer is local.")

    def verbose(self, txt):
        self.log.verbose(txt)

    def warning(self, txt):
        self.log.warning(txt)

    def error(self, txt):
        self.log.error(txt)
@@ -147,107 +182,58 @@ class ZfsAutobackup:
        self.log.verbose("")
        self.log.verbose("#### " + title)

    # sync datasets, or thin-only on both sides
    # target is needed for this.
    def sync_datasets(self, source_node, source_datasets):
    def progress(self, txt):
        self.log.progress(txt)

        description = "[Target]"

        self.set_title("Target settings")

        target_thinner = Thinner(self.args.keep_target)
        target_node = ZfsNode(self.args.backup_name, self, ssh_config=self.args.ssh_config, ssh_to=self.args.ssh_target,
                              readonly=self.args.test, debug_output=self.args.debug_output, description=description,
                              thinner=target_thinner)
        target_node.verbose("Receive datasets under: {}".format(self.args.target_path))

        if self.args.no_send:
            self.set_title("Thinning source and target")
        else:
            self.set_title("Sending and thinning")

        # check if exists, to prevent vague errors
        target_dataset = ZfsDataset(target_node, self.args.target_path)
        if not target_dataset.exists:
            self.error("Target path '{}' does not exist. Please create this dataset first.".format(target_dataset))
            return 255

        if self.args.filter_properties:
            filter_properties = self.args.filter_properties.split(",")
        else:
            filter_properties = []

        if self.args.set_properties:
            set_properties = self.args.set_properties.split(",")
        else:
            set_properties = []

        if self.args.clear_refreservation:
            filter_properties.append("refreservation")

        if self.args.clear_mountpoint:
            set_properties.append("canmount=noauto")

        # sync datasets
        fail_count = 0
        target_datasets = []
        for source_dataset in source_datasets:

            try:
                # determine corresponding target_dataset
                target_name = self.args.target_path + "/" + source_dataset.lstrip_path(self.args.strip_path)
                target_dataset = ZfsDataset(target_node, target_name)
                target_datasets.append(target_dataset)

                # ensure parents exists
                # TODO: this isnt perfect yet, in some cases it can create parents when it shouldn't.
                if not self.args.no_send \
                        and target_dataset.parent not in target_datasets \
                        and not target_dataset.parent.exists:
                    target_dataset.parent.create_filesystem(parents=True)

                # determine common zpool features
                source_features = source_node.get_zfs_pool(source_dataset.split_path()[0]).features
                target_features = target_node.get_zfs_pool(target_dataset.split_path()[0]).features
                common_features = source_features and target_features
                # source_dataset.debug("Common features: {}".format(common_features))

                source_dataset.sync_snapshots(target_dataset, show_progress=self.args.progress,
                                              features=common_features, filter_properties=filter_properties,
                                              set_properties=set_properties,
                                              ignore_recv_exit_code=self.args.ignore_transfer_errors,
                                              holds=not self.args.no_holds, rollback=self.args.rollback,
                                              raw=self.args.raw, other_snapshots=self.args.other_snapshots,
                                              no_send=self.args.no_send,
                                              destroy_incompatible=self.args.destroy_incompatible)
            except Exception as e:
                fail_count = fail_count + 1
                source_dataset.error("FAILED: " + str(e))
                if self.args.debug:
                    raise

        # if not self.args.no_thinning:
        self.thin_missing_targets(ZfsDataset(target_node, self.args.target_path), target_datasets)

        return fail_count
    def clear_progress(self):
        self.log.clear_progress()

    # NOTE: this method also uses self.args. args that need extra processing are passed as function parameters:
    def thin_missing_targets(self, target_dataset, used_target_datasets):
        """thin/destroy target datasets that are missing on the source."""
        """thin target datasets that are missing on the source."""

        self.debug("Thinning obsolete datasets")
        missing_datasets = [dataset for dataset in target_dataset.recursive_datasets if
                            dataset not in used_target_datasets]

        count = 0
        for dataset in missing_datasets:

            count = count + 1
            if self.args.progress:
                self.progress("Analysing missing {}/{}".format(count, len(missing_datasets)))

        for dataset in target_dataset.recursive_datasets:
            try:
                if dataset not in used_target_datasets:
                    dataset.debug("Missing on source, thinning")
                    dataset.thin()

                    # destroy_missing enabled?
                    if self.args.destroy_missing is not None:
            except Exception as e:
                dataset.error("Error during thinning of missing datasets ({})".format(str(e)))

        if self.args.progress:
            self.clear_progress()

    # NOTE: this method also uses self.args. args that need extra processing are passed as function parameters:
    def destroy_missing_targets(self, target_dataset, used_target_datasets):
        """destroy target datasets that are missing on the source and that meet the requirements"""

        self.debug("Destroying obsolete datasets")

        missing_datasets = [dataset for dataset in target_dataset.recursive_datasets if
                            dataset not in used_target_datasets]

        count = 0
        for dataset in missing_datasets:

            count = count + 1
            if self.args.progress:
                self.progress("Analysing destroy missing {}/{}".format(count, len(missing_datasets)))

            try:
                # cant do anything without our own snapshots
                if not dataset.our_snapshots:
                    if dataset.datasets:
                        # its not a leaf, just ignore
                        dataset.debug("Destroy missing: ignoring")
                    else:
                        dataset.verbose(
@@ -284,7 +270,128 @@ class ZfsAutobackup:
                        dataset.destroy(fail_exception=True)

            except Exception as e:
                dataset.error("Error during destoy missing ({})".format(str(e)))
                dataset.error("Error during --destroy-missing: {}".format(str(e)))

        if self.args.progress:
            self.clear_progress()
    def get_send_pipes(self, logger):
        """determine the zfs send pipe"""

        ret = []

        # IO buffer
        if self.args.buffer:
            logger("zfs send buffer : {}".format(self.args.buffer))
            ret.extend([ExecuteNode.PIPE, "mbuffer", "-q", "-s128k", "-m" + self.args.buffer])

        # custom pipes
        for send_pipe in self.args.send_pipe:
            ret.append(ExecuteNode.PIPE)
            ret.extend(send_pipe.split(" "))
            logger("zfs send custom pipe : {}".format(send_pipe))

        # compression
        if self.args.compress != None:
            ret.append(ExecuteNode.PIPE)
            cmd = compressors.compress_cmd(self.args.compress)
            ret.extend(cmd)
            logger("zfs send compression : {}".format(" ".join(cmd)))

        # transfer rate
        if self.args.rate:
            logger("zfs send transfer rate : {}".format(self.args.rate))
            ret.extend([ExecuteNode.PIPE, "mbuffer", "-q", "-s128k", "-m16M", "-R" + self.args.rate])

        return ret

    def get_recv_pipes(self, logger):

        ret = []

        # decompression
        if self.args.compress != None:
            cmd = compressors.decompress_cmd(self.args.compress)
            ret.extend(cmd)
            ret.append(ExecuteNode.PIPE)
            logger("zfs recv decompression : {}".format(" ".join(cmd)))

        # custom pipes
        for recv_pipe in self.args.recv_pipe:
            ret.extend(recv_pipe.split(" "))
            ret.append(ExecuteNode.PIPE)
            logger("zfs recv custom pipe : {}".format(recv_pipe))

        return ret
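    # A sketch of what the two helpers above produce for a run like
    # "--compress zstd-fast --buffer 256M --rate 1M" (the exact compressor argv
    # comes from the compressors module; ExecuteNode.PIPE marks a process
    # boundary for ExecuteNode):
    #
    #   get_send_pipes() -> [PIPE, "mbuffer", "-q", "-s128k", "-m256M",
    #                        PIPE, "zstd", "-3",
    #                        PIPE, "mbuffer", "-q", "-s128k", "-m16M", "-R1M"]
    #   get_recv_pipes() -> ["zstd", "-dc", PIPE]
    #
    # Appended to zfs send and prepended to zfs recv this becomes, roughly:
    # zfs send ... | mbuffer | zstd | mbuffer -R1M | (ssh) | zstd -dc | zfs recv ...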
    # NOTE: this method also uses self.args. args that need extra processing are passed as function parameters:
    def sync_datasets(self, source_node, source_datasets, target_node):
        """Sync datasets, or thin-only on both sides
        :type target_node: ZfsNode
        :type source_datasets: list of ZfsDataset
        :type source_node: ZfsNode
        """

        send_pipes = self.get_send_pipes(source_node.verbose)
        recv_pipes = self.get_recv_pipes(target_node.verbose)

        fail_count = 0
        count = 0
        target_datasets = []
        for source_dataset in source_datasets:

            # stats
            if self.args.progress:
                count = count + 1
                self.progress("Analysing dataset {}/{} ({} failed)".format(count, len(source_datasets), fail_count))

            try:
                # determine corresponding target_dataset
                target_name = self.args.target_path + "/" + source_dataset.lstrip_path(self.args.strip_path)
                target_dataset = ZfsDataset(target_node, target_name)
                target_datasets.append(target_dataset)

                # ensure parents exists
                # TODO: this isnt perfect yet, in some cases it can create parents when it shouldn't.
                if not self.args.no_send \
                        and target_dataset.parent not in target_datasets \
                        and not target_dataset.parent.exists:
                    target_dataset.parent.create_filesystem(parents=True)

                # determine common zpool features (cached, so no problem we call it often)
                source_features = source_node.get_zfs_pool(source_dataset.split_path()[0]).features
                target_features = target_node.get_zfs_pool(target_dataset.split_path()[0]).features
                common_features = source_features and target_features

                # sync the snapshots of this dataset
                source_dataset.sync_snapshots(target_dataset, show_progress=self.args.progress,
                                              features=common_features, filter_properties=self.filter_properties_list(),
                                              set_properties=self.set_properties_list(),
                                              ignore_recv_exit_code=self.args.ignore_transfer_errors,
                                              holds=not self.args.no_holds, rollback=self.args.rollback,
                                              also_other_snapshots=self.args.other_snapshots,
                                              no_send=self.args.no_send,
                                              destroy_incompatible=self.args.destroy_incompatible,
                                              send_pipes=send_pipes, recv_pipes=recv_pipes,
                                              decrypt=self.args.decrypt, encrypt=self.args.encrypt, )
            except Exception as e:
                fail_count = fail_count + 1
                source_dataset.error("FAILED: " + str(e))
                if self.args.debug:
                    raise

        if self.args.progress:
            self.clear_progress()

        target_path_dataset = ZfsDataset(target_node, self.args.target_path)
        if not self.args.no_thinning:
            self.thin_missing_targets(target_dataset=target_path_dataset, used_target_datasets=target_datasets)

        if self.args.destroy_missing is not None:
            self.destroy_missing_targets(target_dataset=target_path_dataset, used_target_datasets=target_datasets)

        return fail_count
    def thin_source(self, source_datasets):

@@ -293,17 +400,58 @@ class ZfsAutobackup:
        for source_dataset in source_datasets:
            source_dataset.thin(skip_holds=True)

    def filter_replicated(self, datasets):
        if not self.args.ignore_replicated:
            return datasets
        else:
            self.set_title("Filtering already replicated filesystems")
            ret = []
            for dataset in datasets:
                if dataset.is_changed(self.args.min_change):
                    ret.append(dataset)
                else:
                    dataset.verbose("Ignoring, already replicated")

            return ret

    def filter_properties_list(self):

        if self.args.filter_properties:
            filter_properties = self.args.filter_properties.split(",")
        else:
            filter_properties = []

        if self.args.clear_refreservation:
            filter_properties.append("refreservation")

        return filter_properties

    def set_properties_list(self):

        if self.args.set_properties:
            set_properties = self.args.set_properties.split(",")
        else:
            set_properties = []

        if self.args.clear_mountpoint:
            set_properties.append("canmount=noauto")

        return set_properties
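    # Example (a sketch): with
    #   --filter-properties quota --clear-refreservation
    #   --set-properties compression=lz4 --clear-mountpoint
    # the two helpers above return:
    #
    #   filter_properties_list() -> ["quota", "refreservation"]
    #   set_properties_list()    -> ["compression=lz4", "canmount=noauto"]
    #
    # which recv_pipe() presumably turns into "zfs recv -x quota -x refreservation
    # -o compression=lz4 -o canmount=noauto ..." on the receiving side.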
    def run(self):

        try:
            self.verbose(self.HEADER)

            if self.args.test:
                self.verbose("TEST MODE - SIMULATING WITHOUT MAKING ANY CHANGES")
                self.warning("TEST MODE - SIMULATING WITHOUT MAKING ANY CHANGES")

            ################ create source zfsNode
            self.set_title("Source settings")

            description = "[Source]"
            if self.args.no_thinning:
                source_thinner = None
            else:
                source_thinner = Thinner(self.args.keep_source)
            source_node = ZfsNode(self.args.backup_name, self, ssh_config=self.args.ssh_config,
                                  ssh_to=self.args.ssh_source, readonly=self.args.test,
@@ -313,8 +461,24 @@ class ZfsAutobackup:
                       "'autobackup:{}=child')".format(
                           self.args.backup_name, self.args.backup_name))

            ################# select source datasets
            self.set_title("Selecting")
            selected_source_datasets = source_node.selected_datasets

            # Note: Before version v3.1-beta5, we always used exclude_received. This was a problem if you want to replicate an existing backup to another host and use the same backupname/snapshots.
            exclude_paths = []
            exclude_received = self.args.exclude_received
            if self.args.ssh_source == self.args.ssh_target:
                if self.args.target_path:
                    # target and source are the same, make sure to exclude target_path
                    self.warning("Source and target are on the same host, excluding target-path from selection.")
                    exclude_paths.append(self.args.target_path)
                else:
                    self.warning("Source and target are on the same host, excluding received datasets from selection.")
                    exclude_received = True

            selected_source_datasets = source_node.selected_datasets(exclude_received=exclude_received,
                                                                     exclude_paths=exclude_paths)
            if not selected_source_datasets:
                self.error(
                    "No source filesystems selected, please do a 'zfs set autobackup:{0}=true' on the source datasets "
@@ -322,34 +486,56 @@ class ZfsAutobackup:
                        self.args.backup_name))
                return 255

            source_datasets = []

            # filter out already replicated stuff?
            if not self.args.ignore_replicated:
                source_datasets = selected_source_datasets
            else:
                self.set_title("Filtering already replicated filesystems")
                for selected_source_dataset in selected_source_datasets:
                    if selected_source_dataset.is_changed(self.args.min_change):
                        source_datasets.append(selected_source_dataset)
                    else:
                        selected_source_dataset.verbose("Ignoring, already replicated")
            source_datasets = self.filter_replicated(selected_source_datasets)

            ################# snapshotting
            if not self.args.no_snapshot:
                self.set_title("Snapshotting")
                source_node.consistent_snapshot(source_datasets, source_node.new_snapshotname(),
                                                min_changed_bytes=self.args.min_change)

            ################# sync
            # if target is specified, we sync the datasets, otherwise we just thin the source. (e.g. snapshot mode)
            if self.args.target_path:
                fail_count = self.sync_datasets(source_node, source_datasets)

                # create target_node
                self.set_title("Target settings")
                if self.args.no_thinning:
                    target_thinner = None
                else:
                    target_thinner = Thinner(self.args.keep_target)
                target_node = ZfsNode(self.args.backup_name, self, ssh_config=self.args.ssh_config,
                                      ssh_to=self.args.ssh_target,
                                      readonly=self.args.test, debug_output=self.args.debug_output,
                                      description="[Target]",
                                      thinner=target_thinner)
                target_node.verbose("Receive datasets under: {}".format(self.args.target_path))

                self.set_title("Synchronising")

                # check if exists, to prevent vague errors
                target_dataset = ZfsDataset(target_node, self.args.target_path)
                if not target_dataset.exists:
                    raise (Exception(
                        "Target path '{}' does not exist. Please create this dataset first.".format(target_dataset)))

                # do the actual sync
                # NOTE: even with no_send, no_thinning and no_snapshot it does a useful thing, because it checks the common snapshots and shows incompatible snapshots
                fail_count = self.sync_datasets(
                    source_node=source_node,
                    source_datasets=source_datasets,
                    target_node=target_node)

            # no target specified, run in snapshot-only mode
            else:
                if not self.args.no_thinning:
                    self.thin_source(source_datasets)
                fail_count = 0

            if not fail_count:
                if self.args.test:
                    self.set_title("All tests successfull.")
                    self.set_title("All tests successful.")
                else:
                    self.set_title("All operations completed successfully")
                    if not self.args.target_path:
@@ -357,11 +543,11 @@ class ZfsAutobackup:

        else:
            if fail_count != 255:
                self.error("{} failures!".format(fail_count))
                self.error("{} dataset(s) failed!".format(fail_count))

        if self.args.test:
            self.verbose("")
            self.verbose("TEST MODE - DID NOT MAKE ANY CHANGES!")
            self.warning("TEST MODE - DID NOT MAKE ANY CHANGES!")

        return fail_count

@@ -1,15 +1,14 @@
import re
import subprocess
import time

from zfs_autobackup.CachedProperty import CachedProperty
from zfs_autobackup.ExecuteNode import ExecuteError


class ZfsDataset:
    """a zfs dataset (filesystem/volume/snapshot/clone)
    Note that a dataset doesn't have to actually exist (yet/anymore)
    Also most properties are cached for performance-reasons, but also to allow --test to function correctly.

    """a zfs dataset (filesystem/volume/snapshot/clone) Note that a dataset
    doesn't have to actually exist (yet/anymore) Also most properties are cached
    for performance-reasons, but also to allow --test to function correctly.
    """

    # illegal properties per dataset type. these will be removed from --set-properties and --filter-properties
@@ -19,8 +18,11 @@ class ZfsDataset:
    }

    def __init__(self, zfs_node, name, force_exists=None):
        """name: full path of the zfs dataset exists: specify if you already know a dataset exists or not. for
        performance and testing reasons. (otherwise it will have to check with zfs list when needed)
        """
        Args:
            :type zfs_node: ZfsNode.ZfsNode
            :type name: str
            :type force_exists: bool
        """
        self.zfs_node = zfs_node
        self.name = name  # full name
@@ -41,12 +43,24 @@ class ZfsDataset:
        return self.name == obj.name

    def verbose(self, txt):
        """
        Args:
            :type txt: str
        """
        self.zfs_node.verbose("{}: {}".format(self.name, txt))

    def error(self, txt):
        """
        Args:
            :type txt: str
        """
        self.zfs_node.error("{}: {}".format(self.name, txt))

    def debug(self, txt):
        """
        Args:
            :type txt: str
        """
        self.zfs_node.debug("{}: {}".format(self.name, txt))

    def invalidate(self):
@@ -60,11 +74,19 @@ class ZfsDataset:
        return self.name.split("/")

    def lstrip_path(self, count):
        """return name with first count components stripped"""
        """return name with first count components stripped

        Args:
            :type count: int
        """
        return "/".join(self.split_path()[count:])

    def rstrip_path(self, count):
        """return name with last count components stripped"""
        """return name with last count components stripped

        Args:
            :type count: int
        """
        return "/".join(self.split_path()[:-count])

    @property
@@ -90,10 +112,57 @@ class ZfsDataset:
        """true if this dataset is a snapshot"""
        return self.name.find("@") != -1

    def is_selected(self, value, source, inherited, exclude_received, exclude_paths):
        """determine if dataset should be selected for backup (called from
        ZfsNode)

        Args:
            :type exclude_paths: list of str
            :type value: str
            :type source: str
            :type inherited: bool
            :type exclude_received: bool
        """

        # sanity checks
        if source not in ["local", "received", "-"]:
            # probably a program error in zfs-autobackup or new feature in zfs
            raise (Exception(
                "{} autobackup-property has illegal source: '{}' (possible BUG)".format(self.name, source)))

        if value not in ["false", "true", "child", "-"]:
            # user error
            raise (Exception(
                "{} autobackup-property has illegal value: '{}'".format(self.name, value)))

        # our path starts with one of the excluded paths?
        for exclude_path in exclude_paths:
            if self.name.startswith(exclude_path):
                # too noisy for verbose
                self.debug("Excluded (in exclude list)")
                return False

        # now determine if its actually selected
        if value == "false":
            self.verbose("Excluded (disabled)")
            return False
        elif value == "true" or (value == "child" and inherited):
            if source == "local":
                self.verbose("Selected")
                return True
            elif source == "received":
                if exclude_received:
                    self.verbose("Excluded (dataset already received)")
                    return False
                else:
                    self.verbose("Selected")
                    return True
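    # The decision table implemented above, in short ("source" is where the
    # autobackup:NAME property was set according to zfs get; a sketch covering
    # the common cases):
    #
    #   value="true",  source="local"                       -> selected
    #   value="child", source="local", inherited=True       -> selected (parent had =child)
    #   value="child", source="local", inherited=False      -> not selected (only its children are)
    #   value="false"                                       -> excluded
    #   value="true",  source="received", exclude_received  -> excluded (came in via zfs recv)
    #
    # e.g. "zfs set autobackup:backup1=child pool" selects everything under pool,
    # but not pool itself.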
    @CachedProperty
    def parent(self):
        """get zfs-parent of this dataset. for snapshots this means it will get the filesystem/volume that it belongs
        to. otherwise it will return the parent according to path
        """get zfs-parent of this dataset. for snapshots this means it will get
        the filesystem/volume that it belongs to. otherwise it will return the
        parent according to path

        we cache this so everything in the parent that is cached also stays.
        """
@@ -102,39 +171,51 @@ class ZfsDataset:
        else:
            return ZfsDataset(self.zfs_node, self.rstrip_path(1))

    def find_prev_snapshot(self, snapshot, other_snapshots=False):
        """find previous snapshot in this dataset. None if it doesn't exist.
    # NOTE: unused for now
    # def find_prev_snapshot(self, snapshot, also_other_snapshots=False):
    #     """find previous snapshot in this dataset. None if it doesn't exist.
    #
    #     also_other_snapshots: set to true to also return snapshots that where
    #     not created by us. (is_ours)
    #
    #     Args:
    #         :type snapshot: str or ZfsDataset.ZfsDataset
    #         :type also_other_snapshots: bool
    #     """
    #
    #     if self.is_snapshot:
    #         raise (Exception("Please call this on a dataset."))
    #
    #     index = self.find_snapshot_index(snapshot)
    #     while index:
    #         index = index - 1
    #         if also_other_snapshots or self.snapshots[index].is_ours():
    #             return self.snapshots[index]
    #     return None

        other_snapshots: set to true to also return snapshots that where not created by us. (is_ours)
    def find_next_snapshot(self, snapshot, also_other_snapshots=False):
        """find next snapshot in this dataset. None if it doesn't exist

        Args:
            :type snapshot: ZfsDataset
            :type also_other_snapshots: bool
        """

        if self.is_snapshot:
            raise (Exception("Please call this on a dataset."))

        index = self.find_snapshot_index(snapshot)
        while index:
            index = index - 1
            if other_snapshots or self.snapshots[index].is_ours():
                return self.snapshots[index]
        return None

    def find_next_snapshot(self, snapshot, other_snapshots=False):
        """find next snapshot in this dataset. None if it doesn't exist"""

        if self.is_snapshot:
            raise (Exception("Please call this on a dataset."))

        index = self.find_snapshot_index(snapshot)
        while index is not None and index < len(self.snapshots) - 1:
            index = index + 1
            if other_snapshots or self.snapshots[index].is_ours():
            if also_other_snapshots or self.snapshots[index].is_ours():
                return self.snapshots[index]
        return None
    @CachedProperty
    def exists(self):
        """check if dataset exists.
        Use force to force a specific value to be cached, if you already know. Useful for performance reasons"""
        """check if dataset exists. Use force to force a specific value to be
        cached, if you already know. Useful for performance reasons
        """

        if self.force_exists is not None:
            self.debug("Checking if filesystem exists: was forced to {}".format(self.force_exists))
@@ -146,7 +227,11 @@ class ZfsDataset:
                    hide_errors=True) and True)

    def create_filesystem(self, parents=False):
        """create a filesystem"""
        """create a filesystem

        Args:
            :type parents: bool
        """
        if parents:
            self.verbose("Creating filesystem and parents")
            self.zfs_node.run(["zfs", "create", "-p", self.name])
@@ -157,7 +242,12 @@ class ZfsDataset:
        self.force_exists = True

    def destroy(self, fail_exception=False):
        """destroy the dataset. by default failures are not an exception, so we can continue making backups"""
        """destroy the dataset. by default failures are not an exception, so we
        can continue making backups

        Args:
            :type fail_exception: bool
        """

        self.verbose("Destroying")

@@ -169,7 +259,7 @@ class ZfsDataset:
            self.invalidate()
            self.force_exists = False
            return True
        except subprocess.CalledProcessError:
        except ExecuteError:
            if not fail_exception:
                return False
            else:
@@ -196,7 +286,11 @@ class ZfsDataset:
        return ret

    def is_changed(self, min_changed_bytes=1):
        """dataset is changed since ANY latest snapshot ?"""
        """dataset is changed since ANY latest snapshot ?

        Args:
            :type min_changed_bytes: int
        """
        self.debug("Checking if dataset is changed")

        if min_changed_bytes == 0:
@@ -243,7 +337,9 @@ class ZfsDataset:

    @property
    def timestamp(self):
        """get timestamp from snapshot name. Only works for our own snapshots with the correct format."""
        """get timestamp from snapshot name. Only works for our own snapshots
        with the correct format.
        """
        time_str = re.findall("^.*-([0-9]*)$", self.snapshot_name)[0]
        if len(time_str) != 14:
            raise (Exception("Snapshot has invalid timestamp in name: {}".format(self.snapshot_name)))
@@ -253,7 +349,11 @@ class ZfsDataset:
        return time_secs
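    # Example of the naming convention relied on above (a sketch; the 14-digit
    # suffix suggests a %Y%m%d%H%M%S timestamp, which is an assumption here
    # since the conversion lines are elided by the hunk):
    #
    #   import re, time
    #   name = "backup1-20210301123000"
    #   time_str = re.findall("^.*-([0-9]*)$", name)[0]   # -> "20210301123000"
    #   secs = time.mktime(time.strptime(time_str, "%Y%m%d%H%M%S"))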
    def from_names(self, names):
        """convert a list of names to a list ZfsDatasets for this zfs_node"""
        """convert a list of names to a list ZfsDatasets for this zfs_node

        Args:
            :type names: list of str
        """
        ret = []
        for name in names:
            ret.append(ZfsDataset(self.zfs_node, name))
@@ -277,7 +377,6 @@ class ZfsDataset:
    def snapshots(self):
        """get all snapshots of this dataset"""

        if not self.exists:
            return []

@@ -300,7 +399,13 @@ class ZfsDataset:
        return ret

    def find_snapshot(self, snapshot):
        """find snapshot by snapshot (can be a snapshot_name or a different ZfsDataset )"""
        """find snapshot by snapshot (can be a snapshot_name or a different
        ZfsDataset )

        Args:
            :rtype: ZfsDataset
            :type snapshot: str or ZfsDataset
        """

        if not isinstance(snapshot, ZfsDataset):
            snapshot_name = snapshot
@@ -314,7 +419,12 @@ class ZfsDataset:
        return None

    def find_snapshot_index(self, snapshot):
        """find snapshot index by snapshot (can be a snapshot_name or ZfsDataset)"""
        """find snapshot index by snapshot (can be a snapshot_name or
        ZfsDataset)

        Args:
            :type snapshot: str or ZfsDataset
        """

        if not isinstance(snapshot, ZfsDataset):
            snapshot_name = snapshot
@@ -343,7 +453,11 @@ class ZfsDataset:
        return int(output[0])

    def is_changed_ours(self, min_changed_bytes=1):
        """dataset is changed since OUR latest snapshot?"""
        """dataset is changed since OUR latest snapshot?

        Args:
            :type min_changed_bytes: int
        """

        if min_changed_bytes == 0:
            return True
@@ -359,7 +473,11 @@ class ZfsDataset:

    @CachedProperty
    def recursive_datasets(self, types="filesystem,volume"):
        """get all (non-snapshot) datasets recursively under us"""
        """get all (non-snapshot) datasets recursively under us

        Args:
            :type types: str
        """

        self.debug("Getting all recursive datasets under us")

@@ -371,7 +489,11 @@ class ZfsDataset:

    @CachedProperty
    def datasets(self, types="filesystem,volume"):
        """get all (non-snapshot) datasets directly under us"""
        """get all (non-snapshot) datasets directly under us

        Args:
            :type types: str
        """

        self.debug("Getting all datasets under us")
@@ -381,11 +503,20 @@ class ZfsDataset:

        return self.from_names(names[1:])

    def send_pipe(self, features, prev_snapshot=None, resume_token=None, show_progress=False, raw=False):
    def send_pipe(self, features, prev_snapshot, resume_token, show_progress, raw, send_properties, write_embedded, send_pipes):
        """returns a pipe with zfs send output for this snapshot

        resume_token: resume sending from this token. (in that case we don't need to know snapshot names)
        resume_token: resume sending from this token. (in that case we don't
        need to know snapshot names)

        Args:
            :param send_pipes: output cmd array that will be added to actual zfs send command. (e.g. mbuffer or compression program)
            :type send_pipes: list of str
            :type features: list of str
            :type prev_snapshot: ZfsDataset
            :type resume_token: str
            :type show_progress: bool
            :type raw: bool
        """
        # build source command
        cmd = []
@@ -394,28 +525,22 @@ class ZfsDataset:

        # all kind of performance options:
        if 'large_blocks' in features and "-L" in self.zfs_node.supported_send_options:
            cmd.append("-L")  # large block support (only if recordsize>128k which is seldom used)
            cmd.append("--large-block")  # large block support (only if recordsize>128k which is seldom used)

        if 'embedded_data' in features and "-e" in self.zfs_node.supported_send_options:
            cmd.append("-e")  # WRITE_EMBEDDED, more compact stream
        if write_embedded and 'embedded_data' in features and "-e" in self.zfs_node.supported_send_options:
            cmd.append("--embed")  # WRITE_EMBEDDED, more compact stream

        if "-c" in self.zfs_node.supported_send_options:
            cmd.append("-c")  # use compressed WRITE records
            cmd.append("--compressed")  # use compressed WRITE records

        # NOTE: performance is usually worse with this option, according to manual
        # also -D will be deprecated in newer ZFS versions
        # if not resume:
        #     if "-D" in self.zfs_node.supported_send_options:
        #         cmd.append("-D")  # dedupped stream, sends less duplicate data

        # raw? (for encryption)
        # raw? (send over encrypted data in its original encrypted form without decrypting)
        if raw:
            cmd.append("--raw")

        # progress output
        if show_progress:
            cmd.append("-v")
            cmd.append("-P")
            cmd.append("--verbose")
            cmd.append("--parsable")

        # resume a previous send? (don't need more parameters in that case)
        if resume_token:
@@ -423,7 +548,8 @@ class ZfsDataset:

        else:
            # send properties
            cmd.append("-p")
            if send_properties:
                cmd.append("--props")

            # incremental?
            if prev_snapshot:
@@ -431,17 +557,26 @@ class ZfsDataset:

        cmd.append(self.name)

        # if args.buffer and args.ssh_source!="local":
        #     cmd.append("|mbuffer -m {}".format(args.buffer))
        cmd.extend(send_pipes)

        # NOTE: this doesn't start the send yet, it only returns a subprocess.Pipe
        return self.zfs_node.run(cmd, pipe=True)
        output_pipe = self.zfs_node.run(cmd, pipe=True, readonly=True)

        return output_pipe
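    # For a typical incremental send the assembled argv could end up roughly like
    # this (a sketch; which flags appear depends on pool features and the
    # installed zfs version, and the snapshot names are made up):
    #
    #   ["zfs", "send", "--large-block", "--embed", "--compressed",
    #    "--verbose", "--parsable", "--props",
    #    "-i", "@backup1-20210301123000", "pool/data@backup1-20210302123000",
    #    ExecuteNode.PIPE, "zstd", "-3"]
    #
    # The "-i <prev>" part would come from the prev_snapshot branch elided in the
    # hunk above; the PIPE marker and everything after it come from send_pipes.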
    def recv_pipe(self, pipe, features, recv_pipes, filter_properties=None, set_properties=None, ignore_exit_code=False):
        """starts a zfs recv for this snapshot and uses pipe as input

        note: you can call it both on a snapshot or filesystem object.
        The resulting zfs command is the same, only our object cache is invalidated differently.
        note: you can call it both on a snapshot or filesystem object. The
        resulting zfs command is the same, only our object cache is invalidated
        differently.

        Args:
            :param recv_pipes: input cmd array that will be prepended to actual zfs recv command. (e.g. mbuffer or decompression program)
            :type pipe: subprocess.Popen
            :type features: list of str
            :type filter_properties: list of str
            :type set_properties: list of str
            :type ignore_exit_code: bool
        """

        if set_properties is None:
@@ -453,6 +588,8 @@ class ZfsDataset:
        # build target command
        cmd = []

        cmd.extend(recv_pipes)

        cmd.extend(["zfs", "recv"])

        # don't mount filesystem that is received
@@ -495,15 +632,26 @@ class ZfsDataset:
            self.error("error during transfer")
            raise (Exception("Target doesn't exist after transfer, something went wrong."))

        # if args.buffer and args.ssh_target!="local":
        #     cmd.append("|mbuffer -m {}".format(args.buffer))

    def transfer_snapshot(self, target_snapshot, features, prev_snapshot=None, show_progress=False,
                          filter_properties=None, set_properties=None, ignore_recv_exit_code=False, resume_token=None,
                          raw=False):
        """transfer this snapshot to target_snapshot. specify prev_snapshot for incremental transfer
    def transfer_snapshot(self, target_snapshot, features, prev_snapshot, show_progress,
                          filter_properties, set_properties, ignore_recv_exit_code, resume_token,
                          raw, send_properties, write_embedded, send_pipes, recv_pipes):
        """transfer this snapshot to target_snapshot. specify prev_snapshot for
        incremental transfer

        connects a send_pipe() to recv_pipe()

        Args:
            :type send_pipes: list of str
            :type recv_pipes: list of str
            :type target_snapshot: ZfsDataset
            :type features: list of str
            :type prev_snapshot: ZfsDataset
            :type show_progress: bool
            :type filter_properties: list of str
            :type set_properties: list of str
            :type ignore_recv_exit_code: bool
            :type resume_token: str
            :type raw: bool
        """

        if set_properties is None:
@@ -525,9 +673,9 @@ class ZfsDataset:

        # do it
        pipe = self.send_pipe(features=features, show_progress=show_progress, prev_snapshot=prev_snapshot,
                              resume_token=resume_token, raw=raw)
                              resume_token=resume_token, raw=raw, send_properties=send_properties, write_embedded=write_embedded, send_pipes=send_pipes)
        target_snapshot.recv_pipe(pipe, features=features, filter_properties=filter_properties,
                                  set_properties=set_properties, ignore_exit_code=ignore_recv_exit_code)
                                  set_properties=set_properties, ignore_exit_code=ignore_recv_exit_code, recv_pipes=recv_pipes)
    def abort_resume(self):
        """abort current resume state"""
@@ -543,7 +691,12 @@ class ZfsDataset:
        return

    def get_resume_snapshot(self, resume_token):
        """returns snapshot that will be resumed by this resume token (run this on source with target-token)"""
        """returns snapshot that will be resumed by this resume token (run this
        on source with target-token)

        Args:
            :type resume_token: str
        """
        # use zfs send -n option to determine this
        # NOTE: on smartos stderr, on linux stdout
        (stdout, stderr) = self.zfs_node.run(["zfs", "send", "-t", resume_token, "-n", "-v"], valid_exitcodes=[0, 255],
@@ -563,11 +716,16 @@ class ZfsDataset:
        return None

    def thin_list(self, keeps=None, ignores=None):
        """determines list of snapshots that should be kept or deleted based on the thinning schedule. cull the herd!
        keep: list of snapshots to always keep (usually the last) ignores: snapshots to completely ignore (usually
        incompatible target snapshots that are going to be destroyed anyway)
        """determines list of snapshots that should be kept or deleted based on
        the thinning schedule. cull the herd!

        returns: ( keeps, obsoletes )

        Args:
            :param keeps: list of snapshots to always keep (usually the last)
            :param ignores: snapshots to completely ignore (usually incompatible target snapshots that are going to be destroyed anyway)
            :type keeps: list of ZfsDataset
            :type ignores: list of ZfsDataset
        """

        if ignores is None:
@@ -577,10 +735,14 @@ class ZfsDataset:

        snapshots = [snapshot for snapshot in self.our_snapshots if snapshot not in ignores]

        return self.zfs_node.thinner.thin(snapshots, keep_objects=keeps)
        return self.zfs_node.thin(snapshots, keep_objects=keeps)

    def thin(self, skip_holds=False):
        """destroys snapshots according to thin_list, except last snapshot"""
        """destroys snapshots according to thin_list, except last snapshot

        Args:
            :type skip_holds: bool
        """

        (keeps, obsoletes) = self.thin_list(keeps=self.our_snapshots[-1:])
        for obsolete in obsoletes:
@@ -591,8 +753,11 @@ class ZfsDataset:
            self.snapshots.remove(obsolete)
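    # How the default schedule "10,1d1w,1w1m,1m1y" (see --keep-source/--keep-target)
    # plays out here: keep the 10 most recent snapshots, plus one per day for a
    # week, one per week for a month and one per month for a year; the rest ends
    # up in the obsoletes list. A sketch of the call thin() makes:
    #
    #   keeps, obsoletes = dataset.thin_list(keeps=dataset.our_snapshots[-1:])
    #   for snapshot in obsoletes:
    #       snapshot.destroy()   # what thin() does, minus the hold/release handling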
    def find_common_snapshot(self, target_dataset):
        """find latest common snapshot between us and target
        returns None if its an initial transfer
        """find latest common snapshot between us and target returns None if its
        an initial transfer

        Args:
            :type target_dataset: ZfsDataset
        """
        if not target_dataset.snapshots:
            # target has nothing yet
@@ -609,27 +774,39 @@ class ZfsDataset:
            target_dataset.error("Cant find common snapshot with source.")
            raise (Exception("You probably need to delete the target dataset to fix this."))

    def find_start_snapshot(self, common_snapshot, other_snapshots):
        """finds first snapshot to send"""
    def find_start_snapshot(self, common_snapshot, also_other_snapshots):
        """finds first snapshot to send :rtype: ZfsDataset or None if we cant
        find it.

        Args:
            :type common_snapshot: ZfsDataset
            :type also_other_snapshots: bool
        """

        if not common_snapshot:
            if not self.snapshots:
                start_snapshot = None
            else:
                # start from beginning
                # no common snapshot, start from beginning
                start_snapshot = self.snapshots[0]

                if not start_snapshot.is_ours() and not other_snapshots:
                if not start_snapshot.is_ours() and not also_other_snapshots:
                    # try to start at a snapshot thats ours
                    start_snapshot = self.find_next_snapshot(start_snapshot, other_snapshots)
                    start_snapshot = self.find_next_snapshot(start_snapshot, also_other_snapshots)
        else:
            start_snapshot = self.find_next_snapshot(common_snapshot, other_snapshots)
            # normal situation: start_snapshot is the one after the common snapshot
            start_snapshot = self.find_next_snapshot(common_snapshot, also_other_snapshots)

        return start_snapshot

    def find_incompatible_snapshots(self, common_snapshot):
        """returns a list of snapshots that is incompatible for a zfs recv onto the common_snapshot.
        all direct followup snapshots with written=0 are compatible."""
        """returns a list of snapshots that is incompatible for a zfs recv onto
        the common_snapshot. all direct followup snapshots with written=0 are
        compatible.

        Args:
            :type common_snapshot: ZfsDataset
        """

        ret = []

@@ -643,7 +820,12 @@ class ZfsDataset:
        return ret

    def get_allowed_properties(self, filter_properties, set_properties):
        """only returns lists of allowed properties for this dataset type"""
        """only returns lists of allowed properties for this dataset type

        Args:
            :type filter_properties: list of str
            :type set_properties: list of str
        """

        allowed_filter_properties = []
        allowed_set_properties = []
@@ -659,50 +841,40 @@ class ZfsDataset:

        return allowed_filter_properties, allowed_set_properties
    def sync_snapshots(self, target_dataset, features, show_progress=False, filter_properties=None, set_properties=None,
                       ignore_recv_exit_code=False, holds=True, rollback=False, raw=False, other_snapshots=False,
                       no_send=False, destroy_incompatible=False):
        """sync this dataset's snapshots to target_dataset, while also thinning out old snapshots along the way."""
    def _add_virtual_snapshots(self, source_dataset, source_start_snapshot, also_other_snapshots):
        """add snapshots from source to our snapshot list. (just the in memory
        list, no disk operations)

        if set_properties is None:
            set_properties = []
        if filter_properties is None:
            filter_properties = []
        Args:
            :type source_dataset: ZfsDataset
            :type source_start_snapshot: ZfsDataset
            :type also_other_snapshots: bool
        """

        # determine common and start snapshot
        target_dataset.debug("Determining start snapshot")
        common_snapshot = self.find_common_snapshot(target_dataset)
        start_snapshot = self.find_start_snapshot(common_snapshot, other_snapshots)
        # should be destroyed before attempting zfs recv:
        incompatible_target_snapshots = target_dataset.find_incompatible_snapshots(common_snapshot)

        # make target snapshot list the same as source, by adding virtual non-existing ones to the list.
        target_dataset.debug("Creating virtual target snapshots")
        source_snapshot = start_snapshot
        while source_snapshot:
            # create virtual target snapshot
            virtual_snapshot = ZfsDataset(target_dataset.zfs_node,
                                          target_dataset.filesystem_name + "@" + source_snapshot.snapshot_name,
        self.debug("Creating virtual target snapshots")
        snapshot = source_start_snapshot
        while snapshot:
            # create virtual target snapshot
            # NOTE: with force_exist we're telling the dataset it doesnt exist yet. (e.g. its virtual)
            virtual_snapshot = ZfsDataset(self.zfs_node,
                                          self.filesystem_name + "@" + snapshot.snapshot_name,
                                          force_exists=False)
            target_dataset.snapshots.append(virtual_snapshot)
            source_snapshot = self.find_next_snapshot(source_snapshot, other_snapshots)
            self.snapshots.append(virtual_snapshot)
            snapshot = source_dataset.find_next_snapshot(snapshot, also_other_snapshots)

        # now let thinner decide what we want on both sides as final state (after all transfers are done)
        if self.our_snapshots:
            self.debug("Create thinning list")
            (source_keeps, source_obsoletes) = self.thin_list(keeps=[self.our_snapshots[-1]])
        else:
            source_obsoletes = []
    def _pre_clean(self, common_snapshot, target_dataset, source_obsoletes, target_obsoletes, target_keeps):
        """cleanup old stuff before starting snapshot syncing

        if target_dataset.our_snapshots:
            (target_keeps, target_obsoletes) = target_dataset.thin_list(keeps=[target_dataset.our_snapshots[-1]],
                                                                        ignores=incompatible_target_snapshots)
        else:
            target_keeps = []
            target_obsoletes = []
        Args:
            :type common_snapshot: ZfsDataset
            :type target_dataset: ZfsDataset
            :type source_obsoletes: list of ZfsDataset
            :type target_obsoletes: list of ZfsDataset
            :type target_keeps: list of ZfsDataset
        """

        # on source: destroy all obsoletes before common. but after common, only delete snapshots that target also
        # doesn't want to explicitly keep
        # on source: destroy all obsoletes before common.
        # But after common, only delete snapshots that target also doesn't want
        before_common = True
        for source_snapshot in self.snapshots:
            if common_snapshot and source_snapshot.snapshot_name == common_snapshot.snapshot_name:
@@ -720,12 +892,14 @@ class ZfsDataset:
                if target_snapshot.exists:
                    target_snapshot.destroy()

        # now actually transfer the snapshots, if we want
        if no_send:
            return
    def _validate_resume_token(self, target_dataset, start_snapshot):
        """validate and get (or destroy) resume token

        Args:
            :type target_dataset: ZfsDataset
            :type start_snapshot: ZfsDataset
        """

        # resume?
        resume_token = None
        if 'receive_resume_token' in target_dataset.properties:
            resume_token = target_dataset.properties['receive_resume_token']
            # not valid anymore?
@@ -733,9 +907,48 @@ class ZfsDataset:
            if not resume_snapshot or start_snapshot.snapshot_name != resume_snapshot.snapshot_name:
                target_dataset.verbose("Cant resume, resume token no longer valid.")
                target_dataset.abort_resume()
                resume_token = None
            else:
                return resume_token

    def _plan_sync(self, target_dataset, also_other_snapshots):
        """plan where to start syncing and what to sync and what to keep

        Args:
            :rtype: ( ZfsDataset, ZfsDataset, list of ZfsDataset, list of ZfsDataset, list of ZfsDataset, list of ZfsDataset )
            :type target_dataset: ZfsDataset
            :type also_other_snapshots: bool
        """

        # determine common and start snapshot
        target_dataset.debug("Determining start snapshot")
        common_snapshot = self.find_common_snapshot(target_dataset)
        start_snapshot = self.find_start_snapshot(common_snapshot, also_other_snapshots)
        incompatible_target_snapshots = target_dataset.find_incompatible_snapshots(common_snapshot)

        # let thinner decide whats obsolete on source
        source_obsoletes = []
        if self.our_snapshots:
            source_obsoletes = self.thin_list(keeps=[self.our_snapshots[-1]])[1]

        # let thinner decide keeps/obsoletes on target, AFTER the transfer would be done (by using virtual snapshots)
        target_dataset._add_virtual_snapshots(self, start_snapshot, also_other_snapshots)
        target_keeps = []
        target_obsoletes = []
        if target_dataset.our_snapshots:
            (target_keeps, target_obsoletes) = target_dataset.thin_list(keeps=[target_dataset.our_snapshots[-1]],
                                                                        ignores=incompatible_target_snapshots)

        return common_snapshot, start_snapshot, source_obsoletes, target_obsoletes, target_keeps, incompatible_target_snapshots
def handle_incompatible_snapshots(self, incompatible_target_snapshots, destroy_incompatible):
|
||||
"""destroy incompatbile snapshots on target before sync, or inform user
|
||||
what to do
|
||||
|
||||
Args:
|
||||
:type incompatible_target_snapshots: list of ZfsDataset
|
||||
:type destroy_incompatible: bool
|
||||
"""
|
||||
|
||||
# incompatible target snapshots?
|
||||
if incompatible_target_snapshots:
|
||||
if not destroy_incompatible:
|
||||
for snapshot in incompatible_target_snapshots:
|
||||
@ -745,12 +958,80 @@ class ZfsDataset:
|
||||
for snapshot in incompatible_target_snapshots:
|
||||
snapshot.verbose("Incompatible snapshot")
|
||||
snapshot.destroy()
|
||||
target_dataset.snapshots.remove(snapshot)
|
||||
self.snapshots.remove(snapshot)
    def sync_snapshots(self, target_dataset, features, show_progress, filter_properties, set_properties,
                       ignore_recv_exit_code, holds, rollback, decrypt, encrypt, also_other_snapshots,
                       no_send, destroy_incompatible, send_pipes, recv_pipes):
        """sync this dataset's snapshots to target_dataset, while also thinning
        out old snapshots along the way.

        Args:
            :type send_pipes: list of str
            :type recv_pipes: list of str
            :type target_dataset: ZfsDataset
            :type features: list of str
            :type show_progress: bool
            :type filter_properties: list of str
            :type set_properties: list of str
            :type ignore_recv_exit_code: bool
            :type holds: bool
            :type rollback: bool
            :type decrypt: bool
            :type encrypt: bool
            :type also_other_snapshots: bool
            :type no_send: bool
            :type destroy_incompatible: bool
        """

        (common_snapshot, start_snapshot, source_obsoletes, target_obsoletes, target_keeps,
         incompatible_target_snapshots) = \
            self._plan_sync(target_dataset=target_dataset, also_other_snapshots=also_other_snapshots)

        # NOTE: we do this because we don't want filesystems to fill up when backups keep failing.
        # Also useful with no_send to still clean up stuff.
        self._pre_clean(
            common_snapshot=common_snapshot, target_dataset=target_dataset,
            target_keeps=target_keeps, target_obsoletes=target_obsoletes, source_obsoletes=source_obsoletes)

        # handle incompatible stuff on target
        target_dataset.handle_incompatible_snapshots(incompatible_target_snapshots, destroy_incompatible)

        # now actually transfer the snapshots, if we want
        if no_send:
            return

        # check if we can resume
        resume_token = self._validate_resume_token(target_dataset, start_snapshot)

        # rollback target to latest?
        if rollback:
            target_dataset.rollback()

        # defaults for these settings if there is no encryption stuff going on:
        send_properties = True
        raw = False
        write_embedded = True

        (active_filter_properties, active_set_properties) = self.get_allowed_properties(filter_properties, set_properties)

        # source dataset encrypted?
        if self.properties.get('encryption', 'off') != 'off':
            # user wants to send it over decrypted?
            if decrypt:
                # when decrypting, zfs can't send properties
                send_properties = False
            else:
                # keep data encrypted by sending it raw (including properties)
                raw = True

        # encrypt at target?
        if encrypt and not raw:
            # filter out encryption properties to let encryption on the target take place
            active_filter_properties.extend(["keylocation", "pbkdf2iters", "keyformat", "encryption"])
            write_embedded = False
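To make the branch logic above easier to follow, here is a small illustrative helper that restates the same flag decisions. This function does not exist in the changeset; its name and the sample assertions are my own:

```python
# Illustrative only: mirrors the encryption-flag branches above (this helper is
# not part of the codebase; it just restates the decisions for clarity).
def plan_send_flags(source_encrypted, decrypt, encrypt):
    send_properties, raw, write_embedded = True, False, True
    if source_encrypted:
        if decrypt:
            send_properties = False  # zfs can't send properties when decrypting
        else:
            raw = True               # keep data encrypted: raw send, props included
    if encrypt and not raw:
        write_embedded = False       # let the target re-encrypt; no embedded blocks
    return raw, send_properties, write_embedded

# an encrypted source sent without decrypt stays raw:
assert plan_send_flags(True, False, False) == (True, True, True)
# a plaintext source with encrypt-at-target disables embedded blocks:
assert plan_send_flags(False, False, True) == (False, True, False)
```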
        # now actually transfer the snapshots
        prev_source_snapshot = common_snapshot
        source_snapshot = start_snapshot
@ -759,15 +1040,14 @@ class ZfsDataset:

            # does target actually want it?
            if target_snapshot not in target_obsoletes:
                # NOTE: should we let transfer_snapshot handle this?
                (allowed_filter_properties, allowed_set_properties) = self.get_allowed_properties(filter_properties,
                                                                                                  set_properties)
                source_snapshot.transfer_snapshot(target_snapshot, features=features,
                                                  prev_snapshot=prev_source_snapshot, show_progress=show_progress,
                                                  filter_properties=allowed_filter_properties,
                                                  set_properties=allowed_set_properties,
                                                  filter_properties=active_filter_properties,
                                                  set_properties=active_set_properties,
                                                  ignore_recv_exit_code=ignore_recv_exit_code,
                                                  resume_token=resume_token, raw=raw)
                                                  resume_token=resume_token, write_embedded=write_embedded, raw=raw, send_properties=send_properties, send_pipes=send_pipes, recv_pipes=recv_pipes)

            resume_token = None

            # hold the new common snapshots and release the previous ones
@ -799,4 +1079,4 @@ class ZfsDataset:
            target_dataset.abort_resume()
            resume_token = None

        source_snapshot = self.find_next_snapshot(source_snapshot, other_snapshots)
        source_snapshot = self.find_next_snapshot(source_snapshot, also_other_snapshots)
@ -10,17 +10,15 @@ from zfs_autobackup.Thinner import Thinner
from zfs_autobackup.CachedProperty import CachedProperty
from zfs_autobackup.ZfsPool import ZfsPool
from zfs_autobackup.ZfsDataset import ZfsDataset
from zfs_autobackup.ExecuteNode import ExecuteError


class ZfsNode(ExecuteNode):
    """a node that contains zfs datasets. implements global (systemwide/pool wide) zfs commands"""

    def __init__(self, backup_name, logger, ssh_config=None, ssh_to=None, readonly=False, description="",
                 debug_output=False, thinner=Thinner()):
                 debug_output=False, thinner=None):
        self.backup_name = backup_name
        if not description and ssh_to:
            self.description = ssh_to
        else:
            self.description = description

        self.logger = logger
@ -33,6 +31,7 @@ class ZfsNode(ExecuteNode):
        else:
            self.verbose("Datasets are local")

        if thinner is not None:
            rules = thinner.human_rules()
            if rules:
                for rule in rules:
@ -40,7 +39,7 @@ class ZfsNode(ExecuteNode):
            else:
                self.verbose("Keep no old snapshots")

        self.thinner = thinner
        self.__thinner = thinner

        # list of ZfsPools
        self.__pools = {}
@ -50,6 +49,12 @@ class ZfsNode(ExecuteNode):

        ExecuteNode.__init__(self, ssh_config=ssh_config, ssh_to=ssh_to, readonly=readonly, debug_output=debug_output)

    def thin(self, objects, keep_objects):
        if self.__thinner is not None:
            return self.__thinner.thin(objects, keep_objects)
        else:
            return (keep_objects, [])
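Since the thinner is now optional, a hedged usage sketch of the new `thin()` behaviour may help; the node name, logger, snapshot list, and schedule string below are invented for illustration:

```python
# Hypothetical sketch: ZfsNode.thin() with and without a configured Thinner.
snapshots = load_snapshots()  # hypothetical: a list of thinnable objects

node = ZfsNode("offsite1", logger, thinner=None)
keeps, obsoletes = node.thin(snapshots, keep_objects=[snapshots[-1]])
# without a thinner nothing is ever marked obsolete:
# keeps == [snapshots[-1]], obsoletes == []

node = ZfsNode("offsite1", logger, thinner=Thinner("10,1d1w,1w1m,1m1y"))
keeps, obsoletes = node.thin(snapshots, keep_objects=[snapshots[-1]])
# the Thinner decides the keep/obsolete split, but keep_objects are always kept
```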
    @CachedProperty
    def supported_send_options(self):
        """list of supported options, for optimizing sends"""
@ -77,7 +82,7 @@ class ZfsNode(ExecuteNode):

        try:
            self.run(cmd, hide_errors=True, valid_exitcodes=[0, 1])
        except subprocess.CalledProcessError:
        except ExecuteError:
            return False

        return True
@ -115,7 +120,7 @@ class ZfsNode(ExecuteNode):
            self._progress_total_bytes = int(progress_fields[2])
        elif progress_fields[0] == 'incremental':
            self._progress_total_bytes = int(progress_fields[3])
        else:
        elif progress_fields[1].isnumeric():
            bytes_ = int(progress_fields[1])
            if self._progress_total_bytes:
                percentage = min(100, int(bytes_ * 100 / self._progress_total_bytes))
@ -123,9 +128,8 @@ class ZfsNode(ExecuteNode):
                bytes_left = self._progress_total_bytes - bytes_
                minutes_left = int((bytes_left / (bytes_ / (time.time() - self._progress_start_time))) / 60)

                print(">>> {}% {}MB/s (total {}MB, {} minutes left) \r".format(percentage, speed, int(
                    self._progress_total_bytes / (1024 * 1024)), minutes_left), end='', file=sys.stderr)
                sys.stderr.flush()
                self.logger.progress("Transfer {}% {}MB/s (total {}MB, {} minutes left)".format(percentage, speed, int(
                    self._progress_total_bytes / (1024 * 1024)), minutes_left))

            return
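For context, here is a hedged sketch of the kind of verbose `zfs send` progress lines the parser above appears to handle. The exact field layout depends on the ZFS version; these sample lines are assumptions derived from the parsing logic, not taken from the diff:

```python
# Assumed samples of tab-separated `zfs send` verbose/parsable stderr output:
samples = [
    "full\ttank/data@snap1\t1234567",     # fields[0]=='full' -> total at fields[2]
    "incremental\tsnap1\tsnap2\t234567",  # fields[0]=='incremental' -> total at fields[3]
    "13:37:00\t65536\ttank/data@snap2",   # fields[1] numeric -> bytes sent so far
]
for line in samples:
    fields = line.split("\t")
    if fields[0] == "full":
        print("total:", int(fields[2]))
    elif fields[0] == "incremental":
        print("total:", int(fields[3]))
    elif fields[1].isnumeric():
        print("sent so far:", int(fields[1]))
```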
@ -135,8 +139,8 @@ class ZfsNode(ExecuteNode):
        else:
            self.error(prefix + line.rstrip())

    def _parse_stderr_pipe(self, line, hide_errors):
        self.parse_zfs_progress(line, hide_errors, "STDERR|> ")
    # def _parse_stderr_pipe(self, line, hide_errors):
    #     self.parse_zfs_progress(line, hide_errors, "STDERR|> ")

    def _parse_stderr(self, line, hide_errors):
        self.parse_zfs_progress(line, hide_errors, "STDERR > ")
@ -147,6 +151,9 @@ class ZfsNode(ExecuteNode):
    def error(self, txt):
        self.logger.error("{} {}".format(self.description, txt))

    def warning(self, txt):
        self.logger.warning("{} {}".format(self.description, txt))

    def debug(self, txt):
        self.logger.debug("{} {}".format(self.description, txt))

@ -193,8 +200,7 @@ class ZfsNode(ExecuteNode):
        self.verbose("Creating snapshots {} in pool {}".format(snapshot_name, pool_name))
        self.run(cmd, readonly=False)

    @CachedProperty
    def selected_datasets(self):
    def selected_datasets(self, exclude_received, exclude_paths):
        """determine filesystems that should be backed up by looking at the special autobackup-property, systemwide

        returns: list of ZfsDataset
@ -204,35 +210,32 @@ class ZfsNode(ExecuteNode):

        # get all source filesystems that have the backup property
        lines = self.run(tab_split=True, readonly=True, cmd=[
            "zfs", "get", "-t", "volume,filesystem", "-o", "name,value,source", "-s", "local,inherited", "-H",
            "zfs", "get", "-t", "volume,filesystem", "-o", "name,value,source", "-H",
            "autobackup:" + self.backup_name
        ])

        # determine filesystems that should actually be backed up
        # The returnlist of selected ZfsDataset's:
        selected_filesystems = []
        direct_filesystems = []

        # list of sources, used to resolve inherited sources
        sources = {}

        for line in lines:
            (name, value, source) = line
            (name, value, raw_source) = line
            dataset = ZfsDataset(self, name)

            if value == "false":
                dataset.verbose("Ignored (disabled)")
            # "resolve" inherited sources
            sources[name] = raw_source
            if raw_source.find("inherited from ") == 0:
                inherited = True
                inherited_from = re.sub("^inherited from ", "", raw_source)
                source = sources[inherited_from]
            else:
                inherited = False
                source = raw_source

            else:
                if source == "local" and (value == "true" or value == "child"):
                    direct_filesystems.append(name)

                if source == "local" and value == "true":
                    dataset.verbose("Selected (direct selection)")
            # determine it
            if dataset.is_selected(value=value, source=source, inherited=inherited, exclude_received=exclude_received, exclude_paths=exclude_paths):
                selected_filesystems.append(dataset)
                elif source.find("inherited from ") == 0 and (value == "true" or value == "child"):
                    inherited_from = re.sub("^inherited from ", "", source)
                    if inherited_from in direct_filesystems:
                        selected_filesystems.append(dataset)
                        dataset.verbose("Selected (inherited selection)")
                    else:
                        dataset.debug("Ignored (already a backup)")
                else:
                    dataset.verbose("Ignored (only children)")

        return selected_filesystems
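A hedged sketch of the inherited-source resolution introduced above, run against raw `zfs get -H` style rows (the dataset names are invented; the three-column shape follows the code shown in this hunk):

```python
# Hypothetical example of how "inherited from X" sources are resolved back to
# the originating dataset's source, mirroring the loop above.
import re

rows = [
    ("tank/data",     "true", "local"),
    ("tank/data/www", "true", "inherited from tank/data"),
]
sources = {}
for name, value, raw_source in rows:
    sources[name] = raw_source
    if raw_source.find("inherited from ") == 0:
        inherited_from = re.sub("^inherited from ", "", raw_source)
        # the effective source is the source of the dataset it inherits from:
        print(name, "->", sources[inherited_from])  # tank/data/www -> local
    else:
        print(name, "->", raw_source)               # tank/data -> local
```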
@ -45,7 +45,6 @@ class ZfsPool():
        ret = {}

        for pair in self.zfs_node.run(tab_split=True, cmd=cmd, readonly=True, valid_exitcodes=[0]):
            if len(pair) == 4:
                ret[pair[1]] = pair[2]

        return ret
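The command producing these rows is not shown in this hunk, so the following is only an assumed illustration of the four-column, tab-split output the loop consumes (columns guessed as name, property, value, source based on the `len(pair) == 4` check):

```python
# Assumed row shape (name, property, value, source) -- an illustration, not
# taken from the diff:
pairs = [
    ["rpool", "ashift", "12", "local"],
    ["rpool", "feature@hole_birth", "active", "local"],
]
ret = {pair[1]: pair[2] for pair in pairs if len(pair) == 4}
# ret == {'ashift': '12', 'feature@hole_birth': 'active'}
```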
69
zfs_autobackup/compressors.py
Normal file
@ -0,0 +1,69 @@
# Adopted from Syncoid :)

# this software is licensed for use under the Free Software Foundation's GPL v3.0 license, as retrieved
# from http://www.gnu.org/licenses/gpl-3.0.html on 2014-11-17. A copy should also be available in this
# project's Git repository at https://github.com/jimsalterjrs/sanoid/blob/master/LICENSE.

COMPRESS_CMDS = {
    'gzip': {
        'cmd': 'gzip',
        'args': ['-3'],
        'dcmd': 'zcat',
        'dargs': [],
    },
    'pigz-fast': {
        'cmd': 'pigz',
        'args': ['-3'],
        'dcmd': 'pigz',
        'dargs': ['-dc'],
    },
    'pigz-slow': {
        'cmd': 'pigz',
        'args': ['-9'],
        'dcmd': 'pigz',
        'dargs': ['-dc'],
    },
    'zstd-fast': {
        'cmd': 'zstdmt',
        'args': ['-3'],
        'dcmd': 'zstdmt',
        'dargs': ['-dc'],
    },
    'zstd-slow': {
        'cmd': 'zstdmt',
        'args': ['-19'],
        'dcmd': 'zstdmt',
        'dargs': ['-dc'],
    },
    'xz': {
        'cmd': 'xz',
        'args': [],
        'dcmd': 'xz',
        'dargs': ['-d'],
    },
    'lzo': {
        'cmd': 'lzop',
        'args': [],
        'dcmd': 'lzop',
        'dargs': ['-dfc'],
    },
    'lz4': {
        'cmd': 'lz4',
        'args': [],
        'dcmd': 'lz4',
        'dargs': ['-dc'],
    },
}


def compress_cmd(compressor):
    ret = [COMPRESS_CMDS[compressor]['cmd']]
    ret.extend(COMPRESS_CMDS[compressor]['args'])
    return ret


def decompress_cmd(compressor):
    ret = [COMPRESS_CMDS[compressor]['dcmd']]
    ret.extend(COMPRESS_CMDS[compressor]['dargs'])
    return ret


def choices():
    return COMPRESS_CMDS.keys()
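A hedged usage sketch of the new module: the wiring of compressor commands into the `send_pipes`/`recv_pipes` arguments added to `sync_snapshots` above is an assumption based on this changeset, not something shown explicitly in the diff:

```python
# Sketch: turning a compressor choice into shell pipe commands for the new
# send_pipes/recv_pipes arguments (the wiring is assumed, not shown here).
from zfs_autobackup.compressors import compress_cmd, decompress_cmd, choices

print(list(choices()))              # ['gzip', 'pigz-fast', ..., 'lz4']
print(compress_cmd('zstd-fast'))    # ['zstdmt', '-3']
print(decompress_cmd('zstd-fast'))  # ['zstdmt', '-dc']

# e.g. compress on the sending side, decompress on the receiving side:
send_pipes = [" ".join(compress_cmd('zstd-fast'))]    # ["zstdmt -3"]
recv_pipes = [" ".join(decompress_cmd('zstd-fast'))]  # ["zstdmt -dc"]
```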