Compare commits

...

68 Commits

SHA1 Message Date
539b043a9b Added more tips 2024-08-01 17:47:59 +02:00
b38a717b43 Update README.md 2023-12-02 08:45:47 +01:00
6d4f22b69e Revert "wip"
This reverts commit de3dff77b8.
2023-11-22 11:13:02 +01:00
7122dc92af Update CliBase.py 2023-11-02 23:01:36 +01:00
843b87f319 Rename chunk-size to buffer-chunk-size 2023-10-16 11:20:02 +02:00
7feae675a6 Implement chunk size argument and refactor mbuffer command generation
Fixes #203
2023-10-16 11:20:02 +02:00
7586cacb49 Update README.md 2023-10-15 19:59:06 +02:00
e0c09e9975 Update README.md 2023-10-15 19:58:24 +02:00
de3dff77b8 wip 2023-10-15 16:16:05 +02:00
a62e793247 add obscure kernel test 2023-10-12 22:19:30 +02:00
439ea6a3bc more tests 2023-10-04 13:29:39 +02:00
ff86e3c67f improve ssh speed during testing 2023-10-03 12:49:32 +02:00
8b8be80ab7 tests can be run in a dockercontainer now. (just start ./tests/run_tests_docker to magically do it) changed time patching during testing to use mocktime() instead. fixed alpine issues. fixed #206 2023-10-02 23:15:37 +02:00
5cca819916 central timehandling and better mocking during test 2023-10-02 16:29:46 +02:00
477e980ba2 more test fixing 2023-09-28 00:00:10 +02:00
b817df8779 fix test 2 2023-09-27 23:47:54 +02:00
46580fb500 fix test 2023-09-27 23:43:37 +02:00
aa2c283746 various automount fixes 2023-09-27 23:34:29 +02:00
16ab4f8183 dont automount/read props in testmode 2023-09-27 01:32:29 +02:00
50f8aba101 dont automount when encryption is enabled but no key is loaded 2023-09-27 01:22:30 +02:00
771127d34a fix #112, pretty big change in mounting behaviour 2023-09-27 00:52:06 +02:00
ea8beee7c8 analyse missing debug output 2023-09-26 22:17:17 +02:00
defbc2d0bf added --include-received to overrule auto enabling of exclude received. :) fix #150 2023-09-26 22:01:02 +02:00
4e4de2de5a fix #195 2023-09-26 21:49:13 +02:00
de898fc258 fix #217 2023-09-26 21:36:03 +02:00
bdc156e48d types 2023-09-26 21:24:19 +02:00
f3caca48f2 transfer output is now in the form of source -> target 2023-09-26 19:17:00 +02:00
c0a8cb33ad less verbose output when not finding common snapshot 2023-09-26 19:04:39 +02:00
feb3972cd7 better output 2023-09-26 19:00:19 +02:00
e30a393d0e cleaner error output when destroy-incompatible fails 2023-09-26 18:51:24 +02:00
f8cd77e6e4 --destroy-incompatible now only rolls back if needed 2023-09-26 18:39:03 +02:00
06420978d5 better output 2023-09-26 18:23:08 +02:00
54e590175d readability of yellow notes on white terminals :) 2023-09-26 18:17:34 +02:00
6e5a6764c5 fix #219 2023-09-26 18:01:09 +02:00
a7d05a7064 allow disabling guid-checking as well, for performance 2023-09-26 17:13:40 +02:00
d90ea7edd2 reduce number of dataset exist-checks 2023-09-26 16:52:48 +02:00
090a2d1343 update version 2023-09-26 16:18:50 +02:00
7cffec1d26 check guid of common snapshot, fix #218 2023-09-26 16:16:32 +02:00
aac62f3fe6 issue template 2023-09-25 17:45:03 +02:00
a12b651d17 only publish python 3 2023-08-29 15:53:49 +02:00
62f078eaec github ubuntu doesnt support testing python2 anymore 2023-08-29 15:39:54 +02:00
fd1e7d5b33 fix 2023-08-29 15:33:49 +02:00
03ff730a70 release 3.2 2023-08-29 15:26:53 +02:00
2c5d3c50e1 disable incomplete stuff 2023-08-29 15:23:27 +02:00
ee1d17b6ff Update issue.md 2023-06-19 09:59:08 +02:00
0ff989691f Update issue.md 2023-06-19 09:58:31 +02:00
088710fd39 Update issue.md 2023-06-19 09:57:34 +02:00
c12d63470f Update issue templates 2023-06-19 09:55:11 +02:00
4df9e52a97 Merge branch 'master' of github.com:psy0rz/zfs_autobackup 2023-04-04 16:49:22 +02:00
3155702c47 doc 2023-04-04 16:48:57 +02:00
a77fc9afe7 Create dependabot.yml 2023-04-04 16:44:15 +02:00
7533d9bcc2 fix codeql 2023-04-04 16:39:04 +02:00
bc57ee8d08 fixes actions 2023-04-04 16:36:43 +02:00
be53c454da workaround python 3.10 unittest2 problem 2023-04-04 16:27:18 +02:00
8a6a62ce9c fix github workflow 2023-04-04 16:13:42 +02:00
428e6edc13 fix some tests 2023-04-04 16:10:58 +02:00
23fbafab42 ubuntu18 is no longer available.
add ubuntu 22 support

test python 2 in ubuntu 20
2023-04-04 14:20:19 +02:00
cdd151d45f fix #190. --exclude-received now expects a number of bytes that have to be changed for a dataset to not get excluded (default 0).
this also means it no longer conflicts with --allow-empty.

added regression tests for exclude-unchanged as well.
2023-04-04 14:10:58 +02:00
ab43689a0f Update README.md 2023-03-22 22:35:13 +01:00
535e21863b Update README.md 2023-03-20 10:22:03 +01:00
a078be3e9e Fix reference string checked by destroymissing test 2023-02-13 14:17:33 +01:00
00b230792a Fix a few typos in user-facing messages 2023-02-13 14:17:33 +01:00
8b600c9e9c Short options for send so as to support older zfs versions 2023-02-13 14:17:33 +01:00
60840a4213 Adding Bytes/sec unit to --rate
Usually network transfers are specified in bits/sec but this is using mbuffer -R option which is Bytes/sec.
2023-01-31 20:39:05 +01:00
7f91473188 Update README.md 2022-12-02 00:26:48 +01:00
e106e7f1da Update README.md 2022-12-02 00:25:50 +01:00
d531e9fdaf fix 2022-10-21 18:16:44 +02:00
a322eb96ae crude encryption speed test 2022-10-21 18:14:46 +02:00
40 changed files with 1124 additions and 622 deletions

11
.github/ISSUE_TEMPLATE/issue.md vendored Normal file

@ -0,0 +1,11 @@
---
name: Issue
about: 'Use this if you have issues or feature requests'
title: ''
labels: ''
assignees: ''
---
(Please add the commandline that you use to the issue. AT LEAST add the output of --verbose, but usually --debug is needed as well. Sometimes it helps if you add the output of --debug-output instead, but it's huge, so use an attachment for that.)

11
.github/dependabot.yml vendored Normal file

@ -0,0 +1,11 @@
# To get started with Dependabot version updates, you'll need to specify which
# package ecosystems to update and where the package manifests are located.
# Please see the documentation for all configuration options:
# https://docs.github.com/github/administering-a-repository/configuration-options-for-dependency-updates
version: 2
updates:
- package-ecosystem: "python" # See documentation for possible values
directory: "/" # Location of package manifests
schedule:
interval: "weekly"


@ -17,8 +17,8 @@ on:
pull_request:
# The branches below must be a subset of the branches above
branches: [ master ]
schedule:
- cron: '26 23 * * 3'
# schedule:
# - cron: '26 23 * * 3'
jobs:
analyze:
@ -40,9 +40,10 @@ jobs:
- name: Checkout repository
uses: actions/checkout@v2
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@v1
uses: github/codeql-action/init@v2
with:
languages: ${{ matrix.language }}
# If you wish to specify custom queries, you can do so here or in a config file.
@ -53,7 +54,7 @@ jobs:
# Autobuild attempts to build any compiled languages (C/C++, C#, or Java).
# If this step fails, then you should remove it and run the build manually (see below)
- name: Autobuild
uses: github/codeql-action/autobuild@v1
uses: github/codeql-action/autobuild@v2
# Command-line programs to run using the OS shell.
# 📚 https://git.io/JvXDl
@ -67,4 +68,4 @@ jobs:
# make release
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v1
uses: github/codeql-action/analyze@v2


@ -20,20 +20,20 @@ jobs:
with:
python-version: '3.x'
- name: Set up Python 2.x
uses: actions/setup-python@v2
with:
python-version: '2.x'
# - name: Set up Python 2.x
# uses: actions/setup-python@v2
# with:
# python-version: '2.x'
- name: Install dependencies 3.x
run: |
python -m pip install --upgrade pip
pip3 install setuptools wheel twine
- name: Install dependencies 2.x
run: |
python2 -m pip install --upgrade pip
pip2 install setuptools wheel twine
# - name: Install dependencies 2.x
# run: |
# python2 -m pip install --upgrade pip
# pip2 install setuptools wheel twine
- name: Build and publish
env:
@ -41,6 +41,6 @@ jobs:
TWINE_PASSWORD: ${{ secrets.TWINE_PASSWORD }}
run: |
python3 setup.py sdist bdist_wheel
python2 setup.py sdist bdist_wheel
# python2 setup.py sdist bdist_wheel
twine check dist/*
twine upload dist/*


@ -6,15 +6,12 @@ on: ["push", "pull_request"]
jobs:
ubuntu20:
runs-on: ubuntu-20.04
ubuntu22:
runs-on: ubuntu-22.04
steps:
- name: Checkout
uses: actions/checkout@v2.3.4
uses: actions/checkout@v3.5.0
- name: Prepare
run: sudo apt update && sudo apt install zfsutils-linux lzop pigz zstd gzip xz-utils lz4 mbuffer && sudo -H pip3 install coverage unittest2 mock==3.0.5 coveralls
@ -29,17 +26,15 @@ jobs:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: coveralls --service=github || true
ubuntu18:
runs-on: ubuntu-18.04
ubuntu20:
runs-on: ubuntu-20.04
steps:
- name: Checkout
uses: actions/checkout@v2.3.4
uses: actions/checkout@v3.5.0
- name: Prepare
run: sudo apt update && sudo apt install zfsutils-linux python3-setuptools lzop pigz zstd gzip xz-utils liblz4-tool mbuffer && sudo -H pip3 install coverage unittest2 mock==3.0.5 coveralls
run: sudo apt update && sudo apt install zfsutils-linux lzop pigz zstd gzip xz-utils lz4 mbuffer && sudo -H pip3 install coverage unittest2 mock==3.0.5 coveralls
- name: Regression test
@ -51,26 +46,3 @@ jobs:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: coveralls --service=github || true
ubuntu18_python2:
runs-on: ubuntu-18.04
steps:
- name: Checkout
uses: actions/checkout@v2.3.4
- name: Set up Python 2.x
uses: actions/setup-python@v2
with:
python-version: '2.x'
- name: Prepare
run: sudo apt update && sudo apt install zfsutils-linux python-setuptools lzop pigz zstd gzip xz-utils liblz4-tool mbuffer && sudo -H pip install coverage unittest2 mock==3.0.5 coveralls colorama
- name: Regression test
run: sudo -E ./tests/run_tests
- name: Coveralls
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
COVERALLS_REPO_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: coveralls --service=github || true


@ -14,14 +14,14 @@ You can select what to backup by setting a custom `ZFS property`. This makes it
Other settings are just specified on the commandline: Simply set up and test your zfs-autobackup command and fix all the issues you might encounter. When you're done you can just copy/paste your command into a cron job or script.
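For example, to select a dataset and all of its children for a hypothetical backup job named `offsite1` (the part after `autobackup:` is the backup-name you pass on the commandline; `=child` would select only the children):
```sh
zfs set autobackup:offsite1=true rpool/data
```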
Since it's using ZFS commands, you can see what it's actually doing by specifying `--debug`. This also helps a lot if you run into some strange problem or error. You can just copy-paste the command that fails and play around with it on the commandline. (something I missed in other tools)
Since it's using ZFS commands, you can see what it's actually doing by specifying `--debug`. This also helps a lot if you run into some strange problem or errors. You can just copy-paste the command that fails and play around with it on the commandline. (something I missed in other tools)
An important feature that's missing from other tools is a reliable `--test` option: This allows you to see what zfs-autobackup will do and tune your parameters. It will do everything, except make changes to your system.
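A sketch of that test-first workflow, assuming a backup-name of `offsite1` and a hypothetical target path:
```sh
# dry run: show what zfs-autobackup would do without changing anything
zfs-autobackup --test --verbose offsite1 backuppool/backups
# when the output looks right, run the same command without --test
zfs-autobackup --verbose offsite1 backuppool/backups
```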
## Features
* Works across operating systems: Tested with **Linux**, **FreeBSD/FreeNAS** and **SmartOS**.
* Low learning curve: no complex daemons or services, no additional software or networking needed. (Only read this page)
* Low learning curve: no complex daemons or services, no additional software or networking needed.
* Plays nicely with existing replication systems. (Like Proxmox HA)
* Automatically selects filesystems to backup by looking at a simple ZFS property.
* Creates consistent snapshots. (takes all snapshots at once, atomically.)
@ -31,6 +31,7 @@ An important feature that's missing from other tools is a reliable `--test` opti
* "pull" remote data from a server via SSH and backup it locally.
* "pull+push": Zero trust between source and target.
* Can be scheduled via simple cronjob or run directly from commandline.
* Also supports complex backup geometries.
* ZFS encryption support: Can decrypt / encrypt or even re-encrypt datasets during transfer.
* Supports sending with compression. (Using pigz, zstd etc)
* IO buffering to speed up transfer.
@ -45,6 +46,7 @@ An important feature that's missing from other tools is a reliable `--test` opti
* Easy migration from other zfs backup systems to zfs-autobackup.
* Gracefully handles datasets that no longer exist on source.
* Complete and clean logging.
* All code is regression tested against actual ZFS environments.
* Easy installation:
* Just install zfs-autobackup via pip.
* Only needs to be installed on one side.
@ -55,8 +57,18 @@ An important feature that's missing from other tools is a reliable `--test` opti
Please look at our wiki to [Get started](https://github.com/psy0rz/zfs_autobackup/wiki).
# Sponsor list
This project was sponsored by:
* JetBrains (Provided me with a license for their whole professional product line, https://www.jetbrains.com/pycharm/ )
Or read the [Full manual](https://github.com/psy0rz/zfs_autobackup/wiki/Manual)
# Tips
To release snapshots that are blocked by zfs-autobackup holds (for example because you want to delete them), use:
```sh
zfs list -t snap -o name | grep <dataset> | xargs -n 1 zfs release -r zfs_autobackup:offsite1
```
If deleting still fails after that, check for other holds on the snapshot:
```sh
zfs holds path@snapshotname
```
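If `zfs holds` still lists a hold, it can be released individually; a sketch, assuming the `zfs_autobackup:offsite1` hold tag from above and a hypothetical snapshot name:
```sh
# -r also releases the same hold on snapshots of descendant datasets
zfs release -r zfs_autobackup:offsite1 rpool/data@offsite1-20240101120000
```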

1
scripts/autoupload Executable file

@ -0,0 +1 @@
find zfs_autobackup | entr rsync -avx . "$1":zfs_autobackup

33
scripts/enctest Executable file

@ -0,0 +1,33 @@
#!/bin/bash
#NOTE: usually the speed is the same, but the cpu usage is much higher for ccm
set -e
D=/enctest123
DS=rpool$D
echo sdflsakjfklsjfsda > key.txt
dd if=/dev/urandom of=dump.bin bs=1M count=10000
#readcache
cat dump.bin > /dev/null
zfs destroy $DS || true
zfs create $DS
echo Unencrypted:
sync
time ( cp dump.bin $D/dump.bin; sync )
for E in aes-128-ccm aes-192-ccm aes-256-ccm aes-128-gcm aes-192-gcm aes-256-gcm; do
zfs destroy $DS
zfs create -o encryption=$E -o keylocation=file://`pwd`/key.txt -o keyformat=passphrase $DS
echo $E
sync
time ( cp dump.bin $D/dump.bin; sync )
done

17
tests/Dockerfile Normal file

@ -0,0 +1,17 @@
FROM alpine:3.18
#base packages
RUN apk update
RUN apk add py3-pip
#zfs autobackup tests dependencies
RUN apk add zfs openssh lzop pigz zstd gzip xz lz4 mbuffer udev zfs-udev
#python modules
COPY requirements.txt /
RUN pip3 install -r requirements.txt
#git repo should be mounted in /app:
ENTRYPOINT [ "/app/tests/tests_docker" ]

3
tests/autorun_tests_docker Executable file

@ -0,0 +1,3 @@
#!/bin/sh
find tests zfs_autobackup -name '*.py' |entr ./tests/run_tests_docker $@


@ -2,6 +2,17 @@
# To run tests as non-root, use this hack:
# chmod 4755 /usr/sbin/zpool /usr/sbin/zfs
import sys
import zfs_autobackup.util
#dirty hack for this error:
#AttributeError: module 'collections' has no attribute 'MutableMapping'
if sys.version_info.major == 3 and sys.version_info.minor >= 10:
import collections
setattr(collections, "MutableMapping", collections.abc.MutableMapping)
import subprocess
import random
@ -19,15 +30,17 @@ import contextlib
import sys
import io
import datetime
TEST_POOLS="test_source1 test_source2 test_target1"
ZFS_USERSPACE= subprocess.check_output("dpkg-query -W zfsutils-linux |cut -f2", shell=True).decode('utf-8').rstrip()
ZFS_KERNEL= subprocess.check_output("modinfo zfs|grep ^version |sed 's/.* //'", shell=True).decode('utf-8').rstrip()
# ZFS_USERSPACE= subprocess.check_output("dpkg-query -W zfsutils-linux |cut -f2", shell=True).decode('utf-8').rstrip()
# ZFS_KERNEL= subprocess.check_output("modinfo zfs|grep ^version |sed 's/.* //'", shell=True).decode('utf-8').rstrip()
print("###########################################")
print("#### Unit testing against:")
print("#### Python :"+sys.version.replace("\n", " "))
print("#### ZFS userspace :"+ZFS_USERSPACE)
print("#### ZFS kernel :"+ZFS_KERNEL)
print("#### Python : "+sys.version.replace("\n", " "))
print("#### ZFS version : "+subprocess.check_output("zfs --version", shell=True).decode('utf-8').rstrip().replace('\n', ' '))
print("#############################################")
@ -64,7 +77,7 @@ def redirect_stderr(target):
def shelltest(cmd):
"""execute and print result as nice copypastable string for unit tests (adds extra newlines on top/bottom)"""
ret=(subprocess.check_output("SUDO_ASKPASS=./password.sh sudo -A "+cmd , shell=True).decode('utf-8'))
ret=(subprocess.check_output(cmd , shell=True).decode('utf-8'))
print("######### result of: {}".format(cmd))
print(ret)
@ -76,7 +89,7 @@ def prepare_zpools():
print("Preparing zfs filesystems...")
#need ram blockdevice
subprocess.check_call("modprobe brd rd_size=512000", shell=True)
# subprocess.check_call("modprobe brd rd_size=512000", shell=True)
#remove old stuff
subprocess.call("zpool destroy test_source1 2>/dev/null", shell=True)
@ -96,3 +109,18 @@ def prepare_zpools():
subprocess.check_call("zfs set autobackup:test=child test_source2/fs2", shell=True)
print("Prepare done")
@contextlib.contextmanager
def mocktime(time_str, format="%Y%m%d%H%M%S"):
def fake_datetime_now():
return datetime.datetime.strptime(time_str, format)
with patch.object(zfs_autobackup.util,'datetime_now_mock', fake_datetime_now()):
yield


@ -18,6 +18,17 @@ if ! [ -e /root/.ssh/id_rsa ]; then
ssh -oStrictHostKeyChecking=no localhost true || exit 1
fi
cat >> ~/.ssh/config <<EOF
Host *
addkeystoagent yes
controlpath ~/.ssh/control-master-%r@%h:%p
controlmaster auto
controlpersist 3600
EOF
modprobe brd rd_size=512000
umount /tmp/ZfsCheck*
coverage run --branch --source zfs_autobackup -m unittest discover -vvvvf $SCRIPTDIR $@ 2>&1

16
tests/run_tests_docker Executable file

@ -0,0 +1,16 @@
#!/bin/sh
set -e
#remove stuff from previous local tests
zpool destroy test_source1 2>/dev/null || true
zpool destroy test_source2 2>/dev/null || true
zpool destroy test_target1 2>/dev/null || true
#the brd ram-blockdevice module is needed by the tests
modprobe brd rd_size=512000 || true
# builds and starts a docker container to run the test suite
docker build -t zfs-autobackup-test -f tests/Dockerfile .
docker run --name zfs-autobackup-test --privileged --rm -it -v .:/app zfs-autobackup-test $@


@ -9,11 +9,11 @@ class TestCmdPipe(unittest2.TestCase):
p=CmdPipe(readonly=False, inp=None)
err=[]
out=[]
p.add(CmdItem(["ls", "-d", "/", "/", "/nonexistent"], stderr_handler=lambda line: err.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,2), stdout_handler=lambda line: out.append(line)))
p.add(CmdItem(["sh", "-c", "echo out1;echo err1 >&2; echo out2; echo err2 >&2"], stderr_handler=lambda line: err.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,0), stdout_handler=lambda line: out.append(line)))
executed=p.execute()
self.assertEqual(err, ["ls: cannot access '/nonexistent': No such file or directory"])
self.assertEqual(out, ["/","/"])
self.assertEqual(out, ["out1", "out2"])
self.assertEqual(err, ["err1","err2"])
self.assertIsNone(executed)
def test_input(self):
@ -56,16 +56,16 @@ class TestCmdPipe(unittest2.TestCase):
err2=[]
err3=[]
out=[]
p.add(CmdItem(["ls", "/nonexistent1"], stderr_handler=lambda line: err1.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,2)))
p.add(CmdItem(["ls", "/nonexistent2"], stderr_handler=lambda line: err2.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,2)))
p.add(CmdItem(["ls", "/nonexistent3"], stderr_handler=lambda line: err3.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,2), stdout_handler=lambda line: out.append(line)))
p.add(CmdItem(["sh", "-c", "echo err1 >&2"], stderr_handler=lambda line: err1.append(line), ))
p.add(CmdItem(["sh", "-c", "echo err2 >&2"], stderr_handler=lambda line: err2.append(line), ))
p.add(CmdItem(["sh", "-c", "echo err3 >&2"], stderr_handler=lambda line: err3.append(line), stdout_handler=lambda line: out.append(line)))
executed=p.execute()
self.assertEqual(err1, ["ls: cannot access '/nonexistent1': No such file or directory"])
self.assertEqual(err2, ["ls: cannot access '/nonexistent2': No such file or directory"])
self.assertEqual(err3, ["ls: cannot access '/nonexistent3': No such file or directory"])
self.assertEqual(err1, ["err1"])
self.assertEqual(err2, ["err2"])
self.assertEqual(err3, ["err3"])
self.assertEqual(out, [])
self.assertIsNone(executed)
self.assertTrue(executed)
def test_exitcode(self):
"""test piped exitcodes """
@ -74,9 +74,9 @@ class TestCmdPipe(unittest2.TestCase):
err2=[]
err3=[]
out=[]
p.add(CmdItem(["bash", "-c", "exit 1"], stderr_handler=lambda line: err1.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,1)))
p.add(CmdItem(["bash", "-c", "exit 2"], stderr_handler=lambda line: err2.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,2)))
p.add(CmdItem(["bash", "-c", "exit 3"], stderr_handler=lambda line: err3.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,3), stdout_handler=lambda line: out.append(line)))
p.add(CmdItem(["sh", "-c", "exit 1"], stderr_handler=lambda line: err1.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,1)))
p.add(CmdItem(["sh", "-c", "exit 2"], stderr_handler=lambda line: err2.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,2)))
p.add(CmdItem(["sh", "-c", "exit 3"], stderr_handler=lambda line: err3.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,3), stdout_handler=lambda line: out.append(line)))
executed=p.execute()
self.assertEqual(err1, [])


@ -13,10 +13,10 @@ class TestZfsNode(unittest2.TestCase):
def test_destroymissing(self):
#initial backup
with patch('time.strftime', return_value="test-19101111000000"): #1000 years in past
with mocktime("19101111000000"): #1000 years in past
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --no-holds".split(" ")).run())
with patch('time.strftime', return_value="test-20101111000000"): #far in past
with mocktime("20101111000000"): #far in past
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --no-holds --allow-empty".split(" ")).run())
@ -117,7 +117,7 @@ class TestZfsNode(unittest2.TestCase):
print(buf.getvalue())
#on second run it sees the dangling ex-parent but doesnt know what to do with it (since it has no own snapshot)
self.assertIn("test_source1: Destroy missing: has no snapshots made by us.", buf.getvalue())
self.assertIn("test_source1: Destroy missing: has no snapshots made by us", buf.getvalue())


@ -49,11 +49,11 @@ class TestZfsEncryption(unittest2.TestCase):
self.prepare_encrypted_dataset("11111111", "test_source1/fs1/encryptedsourcekeyless", unload_key=True) # raw mode shouldn't need a key
self.prepare_encrypted_dataset("22222222", "test_target1/encryptedtarget")
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-progress --allow-empty --exclude-received".split(" ")).run())
self.assertFalse(ZfsAutobackup("test test_target1/encryptedtarget --verbose --no-progress --no-snapshot --exclude-received".split(" ")).run())
with patch('time.strftime', return_value="test-20101111000001"):
with mocktime("20101111000001"):
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-progress --allow-empty --exclude-received".split(" ")).run())
self.assertFalse(ZfsAutobackup("test test_target1/encryptedtarget --verbose --no-progress --no-snapshot --exclude-received".split(" ")).run())
@ -86,11 +86,11 @@ test_target1/test_source2/fs2/sub encryption
self.prepare_encrypted_dataset("11111111", "test_source1/fs1/encryptedsource")
self.prepare_encrypted_dataset("22222222", "test_target1/encryptedtarget")
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-progress --decrypt --allow-empty --exclude-received".split(" ")).run())
self.assertFalse(ZfsAutobackup("test test_target1/encryptedtarget --verbose --no-progress --decrypt --no-snapshot --exclude-received".split(" ")).run())
with patch('time.strftime', return_value="test-20101111000001"):
with mocktime("20101111000001"):
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-progress --decrypt --allow-empty --exclude-received".split(" ")).run())
self.assertFalse(ZfsAutobackup("test test_target1/encryptedtarget --verbose --no-progress --decrypt --no-snapshot --exclude-received".split(" ")).run())
@ -121,13 +121,13 @@ test_target1/test_source2/fs2/sub encryptionroot -
self.prepare_encrypted_dataset("11111111", "test_source1/fs1/encryptedsource")
self.prepare_encrypted_dataset("22222222", "test_target1/encryptedtarget")
with patch('time.strftime', return_value="test-20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-progress --encrypt --debug --allow-empty --exclude-received".split(" ")).run())
self.assertFalse(ZfsAutobackup("test test_target1/encryptedtarget --verbose --no-progress --encrypt --debug --no-snapshot --exclude-received".split(" ")).run())
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-progress --encrypt --debug --allow-empty --exclude-received --clear-mountpoint".split(" ")).run())
self.assertFalse(ZfsAutobackup("test test_target1/encryptedtarget --verbose --no-progress --encrypt --debug --no-snapshot --exclude-received --clear-mountpoint".split(" ")).run())
with patch('time.strftime', return_value="test-20101111000001"):
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-progress --encrypt --debug --allow-empty --exclude-received".split(" ")).run())
self.assertFalse(ZfsAutobackup("test test_target1/encryptedtarget --verbose --no-progress --encrypt --debug --no-snapshot --exclude-received".split(" ")).run())
with mocktime("20101111000001"):
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-progress --encrypt --debug --allow-empty --exclude-received --clear-mountpoint".split(" ")).run())
self.assertFalse(ZfsAutobackup("test test_target1/encryptedtarget --verbose --no-progress --encrypt --debug --no-snapshot --exclude-received --clear-mountpoint".split(" ")).run())
r = shelltest("zfs get -r -t filesystem encryptionroot test_target1")
self.assertEqual(r, """
@ -156,14 +156,14 @@ test_target1/test_source2/fs2/sub encryptionroot -
self.prepare_encrypted_dataset("11111111", "test_source1/fs1/encryptedsource")
self.prepare_encrypted_dataset("22222222", "test_target1/encryptedtarget")
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup(
"test test_target1 --verbose --no-progress --decrypt --encrypt --debug --allow-empty --exclude-received".split(" ")).run())
"test test_target1 --verbose --no-progress --decrypt --encrypt --debug --allow-empty --exclude-received --clear-mountpoint".split(" ")).run())
self.assertFalse(ZfsAutobackup(
"test test_target1/encryptedtarget --verbose --no-progress --decrypt --encrypt --debug --no-snapshot --exclude-received".split(
"test test_target1/encryptedtarget --verbose --no-progress --decrypt --encrypt --debug --no-snapshot --exclude-received --clear-mountpoint".split(
" ")).run())
with patch('time.strftime', return_value="test-20101111000001"):
with mocktime("20101111000001"):
self.assertFalse(ZfsAutobackup(
"test test_target1 --verbose --no-progress --decrypt --encrypt --debug --allow-empty --exclude-received".split(" ")).run())
self.assertFalse(ZfsAutobackup(
@ -191,3 +191,99 @@ test_target1/test_source2/fs2 encryptionroot -
test_target1/test_source2/fs2/sub encryptionroot - -
""")
def test_raw_invalid_snapshot(self):
"""in raw mode, its not allowed to have any newer snaphots on target, #219"""
self.prepare_encrypted_dataset("11111111", "test_source1/fs1/encryptedsource")
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-progress".split(" ")).run())
#this is invalid in raw mode
shelltest("zfs snapshot test_target1/test_source1/fs1/encryptedsource@incompatible")
with mocktime("20101111000001"):
#should fail because of incompatible snapshot
self.assertEqual(ZfsAutobackup("test test_target1 --verbose --no-progress --allow-empty".split(" ")).run(),1)
#should destroy incompatible and continue
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-progress --no-snapshot --destroy-incompatible".split(" ")).run())
r = shelltest("zfs get -r -t filesystem encryptionroot test_target1")
self.assertMultiLineEqual(r,"""
NAME PROPERTY VALUE SOURCE
test_target1 encryptionroot - -
test_target1/test_source1 encryptionroot - -
test_target1/test_source1/fs1 encryptionroot - -
test_target1/test_source1/fs1/encryptedsource encryptionroot test_target1/test_source1/fs1/encryptedsource -
test_target1/test_source1/fs1/sub encryptionroot - -
test_target1/test_source2 encryptionroot - -
test_target1/test_source2/fs2 encryptionroot - -
test_target1/test_source2/fs2/sub encryptionroot - -
""")
def test_resume_encrypt_with_no_key(self):
"""test what happens if target encryption key not loaded (this led to a kernel crash of freebsd with 2.1.x i think) while trying to resume"""
self.prepare_encrypted_dataset("11111111", "test_source1/fs1/encryptedsource")
self.prepare_encrypted_dataset("22222222", "test_target1/encryptedtarget")
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1/encryptedtarget --verbose --no-progress --encrypt --allow-empty --exclude-received --clear-mountpoint".split(" ")).run())
r = shelltest("zfs set compress=off test_source1 test_target1")
# big change on source
r = shelltest("dd if=/dev/zero of=/test_source1/fs1/data bs=250M count=1")
# waste space on target
r = shelltest("dd if=/dev/zero of=/test_target1/waste bs=250M count=1")
# should fail and leave resume token
with mocktime("20101111000001"):
self.assertTrue(ZfsAutobackup(
"test test_target1/encryptedtarget --verbose --no-progress --encrypt --exclude-received --allow-empty --clear-mountpoint".split(
" ")).run())
#
# free up space
r = shelltest("rm /test_target1/waste")
# sync
r = shelltest("zfs umount test_target1")
r = shelltest("zfs mount test_target1")
#
# #unload key
shelltest("zfs unload-key test_target1/encryptedtarget")
# resume
with mocktime("20101111000001"):
self.assertEqual(ZfsAutobackup(
"test test_target1/encryptedtarget --verbose --no-progress --encrypt --exclude-received --allow-empty --no-snapshot --clear-mountpoint".split(
" ")).run(),3)
r = shelltest("zfs get -r -t all encryptionroot test_target1")
self.assertEqual(r, """
NAME PROPERTY VALUE SOURCE
test_target1 encryptionroot - -
test_target1/encryptedtarget encryptionroot test_target1/encryptedtarget -
test_target1/encryptedtarget/test_source1 encryptionroot test_target1/encryptedtarget -
test_target1/encryptedtarget/test_source1/fs1 encryptionroot test_target1/encryptedtarget -
test_target1/encryptedtarget/test_source1/fs1@test-20101111000000 encryptionroot test_target1/encryptedtarget -
test_target1/encryptedtarget/test_source1/fs1/encryptedsource encryptionroot test_target1/encryptedtarget/test_source1/fs1/encryptedsource -
test_target1/encryptedtarget/test_source1/fs1/encryptedsource@test-20101111000000 encryptionroot test_target1/encryptedtarget/test_source1/fs1/encryptedsource -
test_target1/encryptedtarget/test_source1/fs1/encryptedsource@test-20101111000001 encryptionroot test_target1/encryptedtarget/test_source1/fs1/encryptedsource -
test_target1/encryptedtarget/test_source1/fs1/sub encryptionroot test_target1/encryptedtarget -
test_target1/encryptedtarget/test_source1/fs1/sub@test-20101111000000 encryptionroot test_target1/encryptedtarget -
test_target1/encryptedtarget/test_source2 encryptionroot test_target1/encryptedtarget -
test_target1/encryptedtarget/test_source2/fs2 encryptionroot test_target1/encryptedtarget -
test_target1/encryptedtarget/test_source2/fs2/sub encryptionroot test_target1/encryptedtarget -
test_target1/encryptedtarget/test_source2/fs2/sub@test-20101111000000 encryptionroot test_target1/encryptedtarget -
""")


@ -33,9 +33,9 @@ class TestExecuteNode(unittest2.TestCase):
#return std err as well, trigger stderr by listing something non existing
with self.subTest("stderr return"):
(stdout, stderr)=node.run(["ls", "nonexistingfile"], return_stderr=True, valid_exitcodes=[2])
(stdout, stderr)=node.run(["sh", "-c", "echo bla >&2"], return_stderr=True, valid_exitcodes=[0])
self.assertEqual(stdout,[])
self.assertRegex(stderr[0],"nonexistingfile")
self.assertRegex(stderr[0],"bla")
#slow command, make sure things dont exit too early
with self.subTest("early exit test"):
@ -110,19 +110,17 @@ class TestExecuteNode(unittest2.TestCase):
with self.subTest("check stderr on pipe output side"):
output=nodea.run(["true"], pipe=True, valid_exitcodes=[0])
(stdout, stderr)=nodeb.run(["ls", "nonexistingfile"], inp=output, return_stderr=True, valid_exitcodes=[2])
(stdout, stderr)=nodeb.run(["sh", "-c", "echo bla >&2"], inp=output, return_stderr=True, valid_exitcodes=[0])
self.assertEqual(stdout,[])
self.assertRegex(stderr[0], "nonexistingfile" )
self.assertRegex(stderr[0], "bla" )
with self.subTest("check stderr on pipe input side (should be only printed)"):
output=nodea.run(["ls", "nonexistingfile"], pipe=True, valid_exitcodes=[2])
output=nodea.run(["sh", "-c", "echo bla >&2"], pipe=True, valid_exitcodes=[0])
(stdout, stderr)=nodeb.run(["true"], inp=output, return_stderr=True, valid_exitcodes=[0])
self.assertEqual(stdout,[])
self.assertEqual(stderr,[])
def test_pipe_local_local(self):
nodea=ExecuteNode(debug_output=True)
nodeb=ExecuteNode(debug_output=True)
@ -209,5 +207,3 @@ class TestExecuteNode(unittest2.TestCase):
if __name__ == '__main__':
unittest.main()


@ -32,7 +32,7 @@ class TestExternalFailures(unittest2.TestCase):
def test_initial_resume(self):
# inital backup, leaves resume token
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.generate_resume()
# --test should resume and succeed
@ -42,11 +42,6 @@ class TestExternalFailures(unittest2.TestCase):
print(buf.getvalue())
# did we really resume?
if "0.6.5" in ZFS_USERSPACE:
# abort this late, for beter coverage
self.skipTest("Resume not supported in this ZFS userspace version")
else:
self.assertIn(": resuming", buf.getvalue())
# should resume and succeed
@ -56,11 +51,6 @@ class TestExternalFailures(unittest2.TestCase):
print(buf.getvalue())
# did we really resume?
if "0.6.5" in ZFS_USERSPACE:
# abort this late, for beter coverage
self.skipTest("Resume not supported in this ZFS userspace version")
else:
self.assertIn(": resuming", buf.getvalue())
r = shelltest("zfs list -H -o name -r -t all test_target1")
@ -81,11 +71,11 @@ test_target1/test_source2/fs2/sub@test-20101111000000
def test_incremental_resume(self):
# initial backup
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty".split(" ")).run())
# incremental backup leaves resume token
with patch('time.strftime', return_value="test-20101111000001"):
with mocktime("20101111000001"):
self.generate_resume()
# --test should resume and succeed
@ -95,11 +85,6 @@ test_target1/test_source2/fs2/sub@test-20101111000000
print(buf.getvalue())
# did we really resume?
if "0.6.5" in ZFS_USERSPACE:
# abort this late, for beter coverage
self.skipTest("Resume not supported in this ZFS userspace version")
else:
self.assertIn(": resuming", buf.getvalue())
# should resume and succeed
@ -110,10 +95,6 @@ test_target1/test_source2/fs2/sub@test-20101111000000
print(buf.getvalue())
# did we really resume?
if "0.6.5" in ZFS_USERSPACE:
# abort this late, for beter coverage
self.skipTest("Resume not supported in this ZFS userspace version")
else:
self.assertIn(": resuming", buf.getvalue())
r = shelltest("zfs list -H -o name -r -t all test_target1")
@ -134,11 +115,9 @@ test_target1/test_source2/fs2/sub@test-20101111000000
# generate an invalid resume token, and verify if its aborted automaticly
def test_initial_resumeabort(self):
if "0.6.5" in ZFS_USERSPACE:
self.skipTest("Resume not supported in this ZFS userspace version")
# inital backup, leaves resume token
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.generate_resume()
# remove corresponding source snapshot, so it becomes invalid
@ -148,11 +127,11 @@ test_target1/test_source2/fs2/sub@test-20101111000000
shelltest("zfs destroy test_target1/test_source1/fs1/sub; true")
# --test try again, should abort old resume
with patch('time.strftime', return_value="test-20101111000001"):
with mocktime("20101111000001"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --test".split(" ")).run())
# try again, should abort old resume
with patch('time.strftime', return_value="test-20101111000001"):
with mocktime("20101111000001"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())
r = shelltest("zfs list -H -o name -r -t all test_target1")
@ -172,26 +151,23 @@ test_target1/test_source2/fs2/sub@test-20101111000000
# generate an invalid resume token, and verify if its aborted automaticly
def test_incremental_resumeabort(self):
if "0.6.5" in ZFS_USERSPACE:
self.skipTest("Resume not supported in this ZFS userspace version")
# initial backup
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty".split(" ")).run())
# icremental backup, leaves resume token
with patch('time.strftime', return_value="test-20101111000001"):
with mocktime("20101111000001"):
self.generate_resume()
# remove corresponding source snapshot, so it becomes invalid
shelltest("zfs destroy test_source1/fs1@test-20101111000001")
# --test try again, should abort old resume
with patch('time.strftime', return_value="test-20101111000002"):
with mocktime("20101111000002"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --test".split(" ")).run())
# try again, should abort old resume
with patch('time.strftime', return_value="test-20101111000002"):
with mocktime("20101111000002"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())
r = shelltest("zfs list -H -o name -r -t all test_target1")
@ -212,22 +188,19 @@ test_target1/test_source2/fs2/sub@test-20101111000000
# create a resume situation, where the other side doesnt want the snapshot anymore ( should abort resume )
def test_abort_unwanted_resume(self):
if "0.6.5" in ZFS_USERSPACE:
self.skipTest("Resume not supported in this ZFS userspace version")
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())
# generate resume
with patch('time.strftime', return_value="test-20101111000001"):
with mocktime("20101111000001"):
self.generate_resume()
with OutputIO() as buf:
with redirect_stdout(buf):
# incremental, doesnt want previous anymore
with patch('time.strftime', return_value="test-20101111000002"):
with mocktime("20101111000002"):
self.assertFalse(ZfsAutobackup(
"test test_target1 --no-progress --verbose --keep-target=0 --allow-empty".split(" ")).run())
"test test_target1 --no-progress --verbose --keep-target=0 --allow-empty --debug".split(" ")).run())
print(buf.getvalue())
@ -250,14 +223,11 @@ test_target1/test_source2/fs2/sub@test-20101111000002
# test with empty snapshot list (this was a bug)
def test_abort_resume_emptysnapshotlist(self):
if "0.6.5" in ZFS_USERSPACE:
self.skipTest("Resume not supported in this ZFS userspace version")
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())
# generate resume
with patch('time.strftime', return_value="test-20101111000001"):
with mocktime("20101111000001"):
self.generate_resume()
shelltest("zfs destroy test_source1/fs1@test-20101111000001")
@ -265,7 +235,7 @@ test_target1/test_source2/fs2/sub@test-20101111000002
with OutputIO() as buf:
with redirect_stdout(buf):
# incremental, doesnt want previous anymore
with patch('time.strftime', return_value="test-20101111000002"):
with mocktime("20101111000002"):
self.assertFalse(ZfsAutobackup(
"test test_target1 --no-progress --verbose --no-snapshot".split(
" ")).run())
@ -277,14 +247,14 @@ test_target1/test_source2/fs2/sub@test-20101111000002
def test_missing_common(self):
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty".split(" ")).run())
# remove common snapshot and leave nothing
shelltest("zfs release zfs_autobackup:test test_source1/fs1@test-20101111000000")
shelltest("zfs destroy test_source1/fs1@test-20101111000000")
with patch('time.strftime', return_value="test-20101111000001"):
with mocktime("20101111000001"):
self.assertTrue(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty".split(" ")).run())
#UPDATE: offcourse the one thing that wasn't tested had a bug :( (in ExecuteNode.run()).
@ -295,7 +265,7 @@ test_target1/test_source2/fs2/sub@test-20101111000002
# #recreate target pool without any features
# # shelltest("zfs set compress=on test_source1; zpool destroy test_target1; zpool create test_target1 -o feature@project_quota=disabled /dev/ram2")
#
# with patch('time.strftime', return_value="test-20101111000000"):
# with mocktime("20101111000000"):
# self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty --no-progress".split(" ")).run())
#
# r = shelltest("zfs list -H -o name -r -t all test_target1")


@ -11,17 +11,17 @@ class TestZfsNode(unittest2.TestCase):
def test_keepsource0target10queuedsend(self):
"""Test if thinner doesnt destroy too much early on if there are no common snapshots YET. Issue #84"""
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup(
"test test_target1 --no-progress --verbose --keep-source=0 --keep-target=10 --allow-empty --no-send".split(
" ")).run())
with patch('time.strftime', return_value="test-20101111000001"):
with mocktime("20101111000001"):
self.assertFalse(ZfsAutobackup(
"test test_target1 --no-progress --verbose --keep-source=0 --keep-target=10 --allow-empty --no-send".split(
" ")).run())
with patch('time.strftime', return_value="test-20101111000002"):
with mocktime("20101111000002"):
self.assertFalse(ZfsAutobackup(
"test test_target1 --no-progress --verbose --keep-source=0 --keep-target=10 --allow-empty".split(
" ")).run())
@ -65,7 +65,7 @@ test_target1/test_source2/fs2/sub@test-20101111000002
shelltest("zfs set autobackup:test=true test_target1/target_shouldnotbeexcluded")
shelltest("zfs create test_target1/target")
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup(
"test test_target1/target --no-progress --verbose --allow-empty".split(
" ")).run())


@ -33,26 +33,28 @@ class TestZfsScaling(unittest2.TestCase):
run_counter=0
with patch.object(ExecuteNode,'run', run_count) as p:
with patch('time.strftime', return_value="test-20101112000000"):
with mocktime("20101112000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --keep-source=10000 --keep-target=10000 --no-holds --allow-empty".split(" ")).run())
#this triggers if you make a change with an impact of more than O(snapshot_count/2)
expected_runs=343
print("ACTUAL RUNS: {}".format(run_counter))
expected_runs=342
print("EXPECTED RUNS: {}".format(expected_runs))
print("ACTUAL RUNS : {}".format(run_counter))
self.assertLess(abs(run_counter-expected_runs), snapshot_count/2)
run_counter=0
with patch.object(ExecuteNode,'run', run_count) as p:
with patch('time.strftime', return_value="test-20101112000001"):
with mocktime("20101112000001"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --keep-source=10000 --keep-target=10000 --no-holds --allow-empty".split(" ")).run())
#this triggers if you make a change with a performance impact of more than O(snapshot_count/2)
expected_runs=47
print("ACTUAL RUNS: {}".format(run_counter))
print("EXPECTED RUNS: {}".format(expected_runs))
print("ACTUAL RUNS : {}".format(run_counter))
self.assertLess(abs(run_counter-expected_runs), snapshot_count/2)
def test_manydatasets(self):
@ -73,12 +75,12 @@ class TestZfsScaling(unittest2.TestCase):
run_counter=0
with patch.object(ExecuteNode,'run', run_count) as p:
with patch('time.strftime', return_value="test-20101112000000"):
with mocktime("20101112000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --no-holds --allow-empty".split(" ")).run())
#this triggers if you make a change with an impact of more than O(snapshot_count/2)
expected_runs=636
#this triggers if you make a change with an impact of more than O(snapshot_count/2)
expected_runs=842
print("EXPECTED RUNS: {}".format(expected_runs))
print("ACTUAL RUNS: {}".format(run_counter))
self.assertLess(abs(run_counter-expected_runs), dataset_count/2)
@ -88,12 +90,12 @@ class TestZfsScaling(unittest2.TestCase):
run_counter=0
with patch.object(ExecuteNode,'run', run_count) as p:
with patch('time.strftime', return_value="test-20101112000001"):
with mocktime("20101112000001"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --no-holds --allow-empty".split(" ")).run())
#this triggers if you make a change with a performance impact of more than O(snapshot_count/2)
expected_runs=842
expected_runs=1047
print("EXPECTED RUNS: {}".format(expected_runs))
print("ACTUAL RUNS: {}".format(run_counter))
self.assertLess(abs(run_counter-expected_runs), dataset_count/2)


@ -2,87 +2,145 @@ import zfs_autobackup.compressors
from basetest import *
import time
class TestSendRecvPipes(unittest2.TestCase):
"""test input/output pipes for zfs send and recv"""
def setUp(self):
prepare_zpools()
self.longMessage=True
self.longMessage = True
def test_send_basics(self):
"""send basics (remote/local send pipe)"""
with self.subTest("local local pipe"):
with patch('time.strftime', return_value="test-20101111000000"):
self.assertFalse(ZfsAutobackup(["test", "test_target1", "--exclude-received", "--no-holds", "--no-progress", "--send-pipe=dd bs=1M", "--recv-pipe=dd bs=2M"]).run())
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup(
["test", "test_target1", "--allow-empty", "--exclude-received", "--no-holds", "--no-progress", "--clear-mountpoint",
"--send-pipe=dd bs=1M", "--recv-pipe=dd bs=2M"]).run())
shelltest("zfs destroy -r test_target1/test_source1/fs1/sub")
with self.subTest("remote local pipe"):
with patch('time.strftime', return_value="test-20101111000000"):
self.assertFalse(ZfsAutobackup(["test", "test_target1", "--exclude-received", "--no-holds", "--no-progress", "--ssh-source=localhost", "--send-pipe=dd bs=1M", "--recv-pipe=dd bs=2M"]).run())
with mocktime("20101111000001"):
self.assertFalse(ZfsAutobackup(
["test", "test_target1", "--allow-empty", "--exclude-received", "--no-holds", "--no-progress",
"--ssh-source=localhost", "--send-pipe=dd bs=1M", "--recv-pipe=dd bs=2M"]).run())
shelltest("zfs destroy -r test_target1/test_source1/fs1/sub")
with self.subTest("local remote pipe"):
with patch('time.strftime', return_value="test-20101111000000"):
self.assertFalse(ZfsAutobackup(["test", "test_target1", "--exclude-received", "--no-holds", "--no-progress", "--ssh-target=localhost", "--send-pipe=dd bs=1M", "--recv-pipe=dd bs=2M"]).run())
with mocktime("20101111000002"):
self.assertFalse(ZfsAutobackup(
["test", "test_target1", "--allow-empty", "--exclude-received", "--no-holds", "--no-progress",
"--ssh-target=localhost", "--send-pipe=dd bs=1M", "--recv-pipe=dd bs=2M"]).run())
shelltest("zfs destroy -r test_target1/test_source1/fs1/sub")
with self.subTest("remote remote pipe"):
with patch('time.strftime', return_value="test-20101111000000"):
self.assertFalse(ZfsAutobackup(["test", "test_target1", "--exclude-received", "--no-holds", "--no-progress", "--ssh-source=localhost", "--ssh-target=localhost", "--send-pipe=dd bs=1M", "--recv-pipe=dd bs=2M"]).run())
with mocktime("20101111000003"):
self.assertFalse(ZfsAutobackup(
["test", "test_target1", "--allow-empty", "--exclude-received", "--no-holds", "--no-progress",
"--ssh-source=localhost", "--ssh-target=localhost", "--send-pipe=dd bs=1M",
"--recv-pipe=dd bs=2M"]).run())
r = shelltest("zfs list -H -o name -r -t all test_target1")
self.assertMultiLineEqual(r, """
test_target1
test_target1/test_source1
test_target1/test_source1/fs1
test_target1/test_source1/fs1@test-20101111000000
test_target1/test_source1/fs1@test-20101111000001
test_target1/test_source1/fs1@test-20101111000002
test_target1/test_source1/fs1@test-20101111000003
test_target1/test_source1/fs1/sub
test_target1/test_source1/fs1/sub@test-20101111000000
test_target1/test_source1/fs1/sub@test-20101111000001
test_target1/test_source1/fs1/sub@test-20101111000002
test_target1/test_source1/fs1/sub@test-20101111000003
test_target1/test_source2
test_target1/test_source2/fs2
test_target1/test_source2/fs2/sub
test_target1/test_source2/fs2/sub@test-20101111000000
test_target1/test_source2/fs2/sub@test-20101111000001
test_target1/test_source2/fs2/sub@test-20101111000002
test_target1/test_source2/fs2/sub@test-20101111000003
""")
def test_compress(self):
"""send basics (remote/local send pipe)"""
for compress in zfs_autobackup.compressors.COMPRESS_CMDS.keys():
with self.subTest("compress "+compress):
with patch('time.strftime', return_value="test-20101111000000"):
self.assertFalse(ZfsAutobackup(["test", "test_target1", "--exclude-received", "--no-holds", "--no-progress", "--compress="+compress]).run())
with self.subTest("compress " + compress):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup(
["test", "test_target1", "--exclude-received", "--no-holds", "--no-progress", "--verbose",
"--compress=" + compress]).run())
shelltest("zfs destroy -r test_target1/test_source1/fs1/sub")
def test_buffer(self):
"""test different buffer configurations"""
with self.subTest("local local pipe"):
with patch('time.strftime', return_value="test-20101111000000"):
self.assertFalse(ZfsAutobackup(["test", "test_target1", "--exclude-received", "--no-holds", "--no-progress", "--buffer=1M" ]).run())
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup(
["test", "test_target1", "--allow-empty", "--exclude-received", "--no-holds", "--no-progress", "--clear-mountpoint", "--buffer=1M"]).run())
shelltest("zfs destroy -r test_target1/test_source1/fs1/sub")
with self.subTest("remote local pipe"):
with patch('time.strftime', return_value="test-20101111000000"):
self.assertFalse(ZfsAutobackup(["test", "test_target1", "--exclude-received", "--no-holds", "--no-progress", "--ssh-source=localhost", "--buffer=1M"]).run())
with mocktime("20101111000001"):
self.assertFalse(ZfsAutobackup(
["test", "test_target1", "--allow-empty", "--verbose", "--exclude-received", "--no-holds",
"--no-progress", "--ssh-source=localhost", "--buffer=1M"]).run())
shelltest("zfs destroy -r test_target1/test_source1/fs1/sub")
with self.subTest("local remote pipe"):
with patch('time.strftime', return_value="test-20101111000000"):
self.assertFalse(ZfsAutobackup(["test", "test_target1", "--exclude-received", "--no-holds", "--no-progress", "--ssh-target=localhost", "--buffer=1M"]).run())
with mocktime("20101111000002"):
self.assertFalse(ZfsAutobackup(
["test", "test_target1", "--allow-empty", "--exclude-received", "--no-holds", "--no-progress",
"--ssh-target=localhost", "--buffer=1M"]).run())
shelltest("zfs destroy -r test_target1/test_source1/fs1/sub")
with self.subTest("remote remote pipe"):
with patch('time.strftime', return_value="test-20101111000000"):
self.assertFalse(ZfsAutobackup(["test", "test_target1", "--exclude-received", "--no-holds", "--no-progress", "--ssh-source=localhost", "--ssh-target=localhost", "--buffer=1M"]).run())
with mocktime("20101111000003"):
self.assertFalse(ZfsAutobackup(
["test", "test_target1", "--allow-empty", "--exclude-received", "--no-holds", "--no-progress",
"--ssh-source=localhost", "--ssh-target=localhost", "--buffer=1M"]).run())
r = shelltest("zfs list -H -o name -r -t all test_target1")
self.assertMultiLineEqual(r, """
test_target1
test_target1/test_source1
test_target1/test_source1/fs1
test_target1/test_source1/fs1@test-20101111000000
test_target1/test_source1/fs1@test-20101111000001
test_target1/test_source1/fs1@test-20101111000002
test_target1/test_source1/fs1@test-20101111000003
test_target1/test_source1/fs1/sub
test_target1/test_source1/fs1/sub@test-20101111000000
test_target1/test_source1/fs1/sub@test-20101111000001
test_target1/test_source1/fs1/sub@test-20101111000002
test_target1/test_source1/fs1/sub@test-20101111000003
test_target1/test_source2
test_target1/test_source2/fs2
test_target1/test_source2/fs2/sub
test_target1/test_source2/fs2/sub@test-20101111000000
test_target1/test_source2/fs2/sub@test-20101111000001
test_target1/test_source2/fs2/sub@test-20101111000002
test_target1/test_source2/fs2/sub@test-20101111000003
""")
def test_rate(self):
"""test rate limit"""
start = time.time()
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup(
["test", "test_target1", "--exclude-received", "--no-holds", "--no-progress", "--rate=50k"]).run())
start=time.time()
with patch('time.strftime', return_value="test-20101111000000"):
self.assertFalse(ZfsAutobackup(["test", "test_target1", "--exclude-received", "--no-holds", "--no-progress", "--rate=50k" ]).run())
#not a great way of verifying but it works.
self.assertGreater(time.time()-start, 5)
# not a great way of verifying but it works.
self.assertGreater(time.time() - start, 5)


@ -85,7 +85,7 @@ class TestThinner(unittest2.TestCase):
if random.random()>=0.5:
things.append(Thing(now))
(keeps, removes)=thinner.thin(things, now=now)
(keeps, removes)=thinner.thin(things, keep_objects=[], now=now)
things=keeps
@ -143,7 +143,7 @@ class TestThinner(unittest2.TestCase):
if random.random()>=0.5:
things.append(Thing(now))
(things, removes)=thinner.thin(things, now=now)
(things, removes)=thinner.thin(things, keep_objects=[], now=now)
result=[]
for thing in things:


@ -38,7 +38,7 @@ class TestZfsVerify(unittest2.TestCase):
shelltest("dd if=/dev/urandom of=/dev/zvol/test_source1/fs1/bad_zvol count=1 bs=512k")
#create backup
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-progress --no-holds".split(" ")).run())
#Do an ugly hack to create a fault in the bad filesystem


@ -35,7 +35,7 @@ class TestZfsAutobackup(unittest2.TestCase):
def test_snapshotmode(self):
"""test snapshot tool mode"""
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test --no-progress --verbose".split(" ")).run())
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
@ -55,11 +55,12 @@ test_target1
""")
def test_defaults(self):
self.maxDiff=2000
with self.subTest("no datasets selected"):
with OutputIO() as buf:
with redirect_stderr(buf):
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertTrue(ZfsAutobackup("nonexisting test_target1 --verbose --debug --no-progress".split(" ")).run())
print(buf.getvalue())
@ -69,7 +70,7 @@ test_target1
with self.subTest("defaults with full verbose and debug"):
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --debug --no-progress".split(" ")).run())
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
@ -98,7 +99,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000
""")
with self.subTest("bare defaults, allow empty"):
with patch('time.strftime', return_value="test-20101111000001"):
with mocktime("20101111000001"):
self.assertFalse(ZfsAutobackup("test test_target1 --allow-empty --no-progress".split(" ")).run())
@ -168,47 +169,43 @@ test_target1/test_source2/fs2/sub@test-20101111000001 userrefs 1 -
""")
#make sure time handling is correct. try to make snapshots a year apart and verify that only snapshots mostly 1y old are kept
#So in this case we only want to see 2 snapshots of 2011, and none of the 2010's anymore.
with self.subTest("test time checking"):
with patch('time.strftime', return_value="test-20111111000000"):
with mocktime("20111211000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --allow-empty --verbose --no-progress".split(" ")).run())
time_str="20111112000000" #month in the "future"
future_timestamp=time_secs=time.mktime(time.strptime(time_str,"%Y%m%d%H%M%S"))
with patch('time.time', return_value=future_timestamp):
with patch('time.strftime', return_value="test-20111111000001"):
with mocktime("20111211000001"):
self.assertFalse(ZfsAutobackup("test test_target1 --allow-empty --verbose --keep-source 1y1y --keep-target 1d1y --no-progress".split(" ")).run())
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
self.assertMultiLineEqual(r,"""
test_source1
test_source1/fs1
test_source1/fs1@test-20111111000000
test_source1/fs1@test-20111111000001
test_source1/fs1@test-20111211000000
test_source1/fs1@test-20111211000001
test_source1/fs1/sub
test_source1/fs1/sub@test-20111111000000
test_source1/fs1/sub@test-20111111000001
test_source1/fs1/sub@test-20111211000000
test_source1/fs1/sub@test-20111211000001
test_source2
test_source2/fs2
test_source2/fs2/sub
test_source2/fs2/sub@test-20111111000000
test_source2/fs2/sub@test-20111111000001
test_source2/fs2/sub@test-20111211000000
test_source2/fs2/sub@test-20111211000001
test_source2/fs3
test_source2/fs3/sub
test_target1
test_target1/test_source1
test_target1/test_source1/fs1
test_target1/test_source1/fs1@test-20111111000000
test_target1/test_source1/fs1@test-20111111000001
test_target1/test_source1/fs1@test-20111211000000
test_target1/test_source1/fs1@test-20111211000001
test_target1/test_source1/fs1/sub
test_target1/test_source1/fs1/sub@test-20111111000000
test_target1/test_source1/fs1/sub@test-20111111000001
test_target1/test_source1/fs1/sub@test-20111211000000
test_target1/test_source1/fs1/sub@test-20111211000001
test_target1/test_source2
test_target1/test_source2/fs2
test_target1/test_source2/fs2/sub
test_target1/test_source2/fs2/sub@test-20111111000000
test_target1/test_source2/fs2/sub@test-20111111000001
test_target1/test_source2/fs2/sub@test-20111211000000
test_target1/test_source2/fs2/sub@test-20111211000001
""")
def test_ignore_othersnaphots(self):
@ -216,7 +213,7 @@ test_target1/test_source2/fs2/sub@test-20111111000001
r=shelltest("zfs snapshot test_source1/fs1@othersimple")
r=shelltest("zfs snapshot test_source1/fs1@otherdate-20001111000000")
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
@ -251,7 +248,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000
r=shelltest("zfs snapshot test_source1/fs1@othersimple")
r=shelltest("zfs snapshot test_source1/fs1@otherdate-20001111000000")
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --other-snapshots".split(" ")).run())
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
@ -286,7 +283,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000
def test_nosnapshot(self):
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-snapshot --no-progress".split(" ")).run())
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
@ -310,7 +307,7 @@ test_target1/test_source2/fs2
def test_nosend(self):
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-send --no-progress".split(" ")).run())
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
@ -333,7 +330,7 @@ test_target1
def test_ignorereplicated(self):
r=shelltest("zfs snapshot test_source1/fs1@otherreplication")
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --ignore-replicated".split(" ")).run())
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
@ -362,7 +359,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000
def test_noholds(self):
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-holds --no-progress".split(" ")).run())
r=shelltest("zfs get -r userrefs test_source1 test_source2 test_target1")
@ -394,7 +391,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000 userrefs 0 -
def test_strippath(self):
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --strip-path=1 --no-progress".split(" ")).run())
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
@ -437,10 +434,10 @@ test_target1/fs2/sub@test-20101111000000
r=shelltest("zfs set refreservation=1M test_source1/fs1")
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --clear-refreservation".split(" ")).run())
r=shelltest("zfs get refreservation -r test_source1 test_source2 test_target1")
r=shelltest("zfs get -r refreservation test_source1 test_source2 test_target1")
self.assertMultiLineEqual(r,"""
NAME PROPERTY VALUE SOURCE
test_source1 refreservation none default
@ -475,10 +472,10 @@ test_target1/test_source2/fs2/sub@test-20101111000000 refreservation -
self.skipTest("This zfs-userspace version doesnt support -o")
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --clear-mountpoint --debug".split(" ")).run())
r=shelltest("zfs get canmount -r test_source1 test_source2 test_target1")
r=shelltest("zfs get -r canmount test_source1 test_source2 test_target1")
self.assertMultiLineEqual(r,"""
NAME PROPERTY VALUE SOURCE
test_source1 canmount on default
@ -493,13 +490,13 @@ test_source2/fs2/sub@test-20101111000000 canmount - -
test_source2/fs3 canmount on default
test_source2/fs3/sub canmount on default
test_target1 canmount on default
test_target1/test_source1 canmount on default
test_target1/test_source1 canmount off local
test_target1/test_source1/fs1 canmount noauto local
test_target1/test_source1/fs1@test-20101111000000 canmount - -
test_target1/test_source1/fs1/sub canmount noauto local
test_target1/test_source1/fs1/sub@test-20101111000000 canmount - -
test_target1/test_source2 canmount on default
test_target1/test_source2/fs2 canmount on default
test_target1/test_source2 canmount off local
test_target1/test_source2/fs2 canmount off local
test_target1/test_source2/fs2/sub canmount noauto local
test_target1/test_source2/fs2/sub@test-20101111000000 canmount - -
""")
@ -508,18 +505,17 @@ test_target1/test_source2/fs2/sub@test-20101111000000 canmount - -
def test_rollback(self):
#initial backup
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())
#make change
r=shelltest("zfs mount test_target1/test_source1/fs1")
r=shelltest("touch /test_target1/test_source1/fs1/change.txt")
with patch('time.strftime', return_value="test-20101111000001"):
with mocktime("20101111000001"):
#should fail (busy)
self.assertTrue(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty".split(" ")).run())
with patch('time.strftime', return_value="test-20101111000002"):
with mocktime("20101111000002"):
#rollback, should succeed
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --rollback".split(" ")).run())
@ -527,36 +523,35 @@ test_target1/test_source2/fs2/sub@test-20101111000000 canmount - -
def test_destroyincompat(self):
#initial backup
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())
#add multiple compatible snapshots (written is still 0)
r=shelltest("zfs snapshot test_target1/test_source1/fs1@compatible1")
r=shelltest("zfs snapshot test_target1/test_source1/fs1@compatible2")
with patch('time.strftime', return_value="test-20101111000001"):
with mocktime("20101111000001"):
#should be ok, is compatible
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty".split(" ")).run())
#add incompatible snapshot by changing and snapshotting
r=shelltest("zfs mount test_target1/test_source1/fs1")
r=shelltest("touch /test_target1/test_source1/fs1/change.txt")
r=shelltest("zfs snapshot test_target1/test_source1/fs1@incompatible1")
with patch('time.strftime', return_value="test-20101111000002"):
with mocktime("20101111000002"):
#--test should fail, now incompatible
self.assertTrue(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --test".split(" ")).run())
with patch('time.strftime', return_value="test-20101111000002"):
with mocktime("20101111000002"):
#should fail, now incompatible
self.assertTrue(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty".split(" ")).run())
with patch('time.strftime', return_value="test-20101111000003"):
with mocktime("20101111000003"):
#--test should succeed by destroying incompatibles
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --destroy-incompatible --test".split(" ")).run())
with patch('time.strftime', return_value="test-20101111000003"):
with mocktime("20101111000003"):
#should succeed by destroying incompatibles
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --destroy-incompatible".split(" ")).run())
@ -594,13 +589,13 @@ test_target1/test_source2/fs2/sub@test-20101111000003
#test all ssh directions
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --ssh-source localhost --exclude-received".split(" ")).run())
with patch('time.strftime', return_value="test-20101111000001"):
with mocktime("20101111000001"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --ssh-target localhost --exclude-received".split(" ")).run())
with patch('time.strftime', return_value="test-20101111000002"):
with mocktime("20101111000002"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --ssh-source localhost --ssh-target localhost".split(" ")).run())
@ -645,7 +640,7 @@ test_target1/test_source2/fs2/sub@test-20101111000002
def test_minchange(self):
#initial
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --min-change 100000".split(" ")).run())
#make small change, use umount to reflect the changes immediately
@ -655,7 +650,7 @@ test_target1/test_source2/fs2/sub@test-20101111000002
#too small change, takes no snapshots
with patch('time.strftime', return_value="test-20101111000001"):
with mocktime("20101111000001"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --min-change 100000".split(" ")).run())
#make big change
@ -663,7 +658,7 @@ test_target1/test_source2/fs2/sub@test-20101111000002
r=shelltest("zfs umount test_source1/fs1; zfs mount test_source1/fs1")
#bigger change, should take snapshot
with patch('time.strftime', return_value="test-20101111000002"):
with mocktime("20101111000002"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --min-change 100000".split(" ")).run())
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
@ -696,7 +691,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000
def test_test(self):
#initial
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --test".split(" ")).run())
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
@ -713,12 +708,12 @@ test_target1
""")
#actually make the initial backup
with patch('time.strftime', return_value="test-20101111000001"):
with mocktime("20101111000001"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())
#test incremental
with patch('time.strftime', return_value="test-20101111000002"):
with mocktime("20101111000002"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --allow-empty --verbose --test".split(" ")).run())
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
@ -754,7 +749,7 @@ test_target1/test_source2/fs2/sub@test-20101111000001
shelltest("zfs create test_target1/test_source1")
shelltest("zfs send test_source1/fs1@migrate1| zfs recv test_target1/test_source1/fs1")
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
@ -787,15 +782,15 @@ test_target1/test_source2/fs2/sub@test-20101111000000
def test_keep0(self):
"""test if keep-source=0 and keep-target=0 dont delete common snapshot and break backup"""
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --keep-source=0 --keep-target=0".split(" ")).run())
#make snapshot, shouldn't delete 0
with patch('time.strftime', return_value="test-20101111000001"):
with mocktime("20101111000001"):
self.assertFalse(ZfsAutobackup("test --no-progress --verbose --keep-source=0 --keep-target=0 --allow-empty".split(" ")).run())
#make snapshot 2, shouldn't delete 0 since it has holds, but will delete 1 since it has no holds
with patch('time.strftime', return_value="test-20101111000002"):
with mocktime("20101111000002"):
self.assertFalse(ZfsAutobackup("test --no-progress --verbose --keep-source=0 --keep-target=0 --allow-empty".split(" ")).run())
r = shelltest("zfs list -H -o name -r -t all " + TEST_POOLS)
@ -827,7 +822,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000
""")
#make another backup but with no-holds. we should naturally end up with only number 3
with patch('time.strftime', return_value="test-20101111000003"):
with mocktime("20101111000003"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --keep-source=0 --keep-target=0 --no-holds --allow-empty".split(" ")).run())
r = shelltest("zfs list -H -o name -r -t all " + TEST_POOLS)
@ -857,7 +852,7 @@ test_target1/test_source2/fs2/sub@test-20101111000003
# run with snapshot-only for 4, since we used no-holds, it will delete 3 on the source, breaking the backup
with patch('time.strftime', return_value="test-20101111000004"):
with mocktime("20101111000004"):
self.assertFalse(ZfsAutobackup("test --no-progress --verbose --keep-source=0 --keep-target=0 --allow-empty".split(" ")).run())
r = shelltest("zfs list -H -o name -r -t all " + TEST_POOLS)

View File

@ -10,10 +10,10 @@ class TestZfsAutobackup31(unittest2.TestCase):
def test_no_thinning(self):
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty".split(" ")).run())
with patch('time.strftime', return_value="test-20101111000001"):
with mocktime("20101111000001"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --keep-target=0 --keep-source=0 --no-thinning".split(" ")).run())
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
@ -54,10 +54,10 @@ test_target1/test_source2/fs2/sub@test-20101111000001
shelltest("zfs create test_target1/a")
shelltest("zfs create test_target1/b")
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1/a --no-progress --verbose --debug".split(" ")).run())
with patch('time.strftime', return_value="test-20101111000001"):
with mocktime("20101111000001"):
self.assertFalse(ZfsAutobackup("test test_target1/b --no-progress --verbose".split(" ")).run())
r=shelltest("zfs list -H -o name -r -t snapshot test_target1")
@ -75,7 +75,7 @@ test_target1/b/test_target1/a/test_source1/fs1/sub@test-20101111000000
def test_zfs_compressed(self):
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(
ZfsAutobackup("test test_target1 --no-progress --verbose --debug --zfs-compressed".split(" ")).run())
@ -84,7 +84,7 @@ test_target1/b/test_target1/a/test_source1/fs1/sub@test-20101111000000
shelltest("zfs set autobackup:test=true test_source1")
with patch('time.strftime', return_value="test-20101111000000"):
with mocktime("20101111000000"):
self.assertFalse(
ZfsAutobackup("test test_target1 --no-progress --verbose --debug --force --strip-path=1".split(" ")).run())
@ -95,3 +95,25 @@ test_target1/fs1@test-20101111000000
test_target1/fs1/sub@test-20101111000000
test_target1/fs2/sub@test-20101111000000
""")
def test_exclude_unchanged(self):
shelltest("zfs snapshot -r test_source1@somesnapshot")
with mocktime("20101111000000"):
self.assertFalse(
ZfsAutobackup(
"test test_target1 --verbose --allow-empty --exclude-unchanged=1".split(" ")).run())
#everything should be excluded, but should not return an error (see #190)
with mocktime("20101111000001"):
self.assertFalse(
ZfsAutobackup(
"test test_target1 --verbose --allow-empty --exclude-unchanged=1".split(" ")).run())
r = shelltest("zfs list -H -o name -r -t snapshot test_target1")
self.assertMultiLineEqual(r, """
test_target1/test_source2/fs2/sub@test-20101111000000
""")

View File

@ -0,0 +1,200 @@
from basetest import *
class TestZfsAutobackup32(unittest2.TestCase):
"""various new 3.2 features"""
def setUp(self):
prepare_zpools()
self.longMessage=True
def test_invalid_common_snapshot(self):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty".split(" ")).run())
#create 2 snapshots with the same name, which are invalid as common snapshot
shelltest("zfs snapshot test_source1/fs1@invalid")
shelltest("zfs snapshot test_target1/test_source1/fs1@invalid")
with mocktime("20101111000001"):
#try the old way (without guid checking), and fail:
self.assertEqual(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --no-guid-check".split(" ")).run(),1)
#new way should be ok:
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --no-snapshot".split(" ")).run())
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
self.assertMultiLineEqual(r,"""
test_source1
test_source1/fs1
test_source1/fs1@test-20101111000000
test_source1/fs1@invalid
test_source1/fs1@test-20101111000001
test_source1/fs1/sub
test_source1/fs1/sub@test-20101111000000
test_source1/fs1/sub@test-20101111000001
test_source2
test_source2/fs2
test_source2/fs2/sub
test_source2/fs2/sub@test-20101111000000
test_source2/fs2/sub@test-20101111000001
test_source2/fs3
test_source2/fs3/sub
test_target1
test_target1/test_source1
test_target1/test_source1/fs1
test_target1/test_source1/fs1@test-20101111000000
test_target1/test_source1/fs1@invalid
test_target1/test_source1/fs1@test-20101111000001
test_target1/test_source1/fs1/sub
test_target1/test_source1/fs1/sub@test-20101111000000
test_target1/test_source1/fs1/sub@test-20101111000001
test_target1/test_source2
test_target1/test_source2/fs2
test_target1/test_source2/fs2/sub
test_target1/test_source2/fs2/sub@test-20101111000000
test_target1/test_source2/fs2/sub@test-20101111000001
""")
def test_invalid_common_snapshot_with_data(self):
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty".split(" ")).run())
#create 2 snapshots with the same name, which are invalid as common snapshot
shelltest("zfs snapshot test_source1/fs1@invalid")
shelltest("touch /test_target1/test_source1/fs1/shouldnotbeHere")
shelltest("zfs snapshot test_target1/test_source1/fs1@invalid")
with mocktime("20101111000001"):
#try the old way and fail:
self.assertEqual(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --destroy-incompatible --no-guid-check".split(" ")).run(),1)
#new way should be ok
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --no-snapshot --destroy-incompatible".split(" ")).run())
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
self.assertMultiLineEqual(r,"""
test_source1
test_source1/fs1
test_source1/fs1@test-20101111000000
test_source1/fs1@invalid
test_source1/fs1@test-20101111000001
test_source1/fs1/sub
test_source1/fs1/sub@test-20101111000000
test_source1/fs1/sub@test-20101111000001
test_source2
test_source2/fs2
test_source2/fs2/sub
test_source2/fs2/sub@test-20101111000000
test_source2/fs2/sub@test-20101111000001
test_source2/fs3
test_source2/fs3/sub
test_target1
test_target1/test_source1
test_target1/test_source1/fs1
test_target1/test_source1/fs1@test-20101111000000
test_target1/test_source1/fs1@test-20101111000001
test_target1/test_source1/fs1/sub
test_target1/test_source1/fs1/sub@test-20101111000000
test_target1/test_source1/fs1/sub@test-20101111000001
test_target1/test_source2
test_target1/test_source2/fs2
test_target1/test_source2/fs2/sub
test_target1/test_source2/fs2/sub@test-20101111000000
test_target1/test_source2/fs2/sub@test-20101111000001
""")
#check consistent mounting behaviour, see issue #112
def test_mount_consitency_mounted(self):
"""only filesystems that have canmount=on with a mountpoint should be mounted. """
shelltest("zfs create -V 10M test_source1/fs1/subvol")
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty".split(" ")).run())
r=shelltest("zfs mount |grep -o /test_target1.*")
self.assertMultiLineEqual(r,"""
/test_target1
/test_target1/test_source1/fs1
/test_target1/test_source1/fs1/sub
/test_target1/test_source2/fs2/sub
""")
def test_mount_consitency_unmounted(self):
"""only test_target1 should be mounted in this test"""
shelltest("zfs create -V 10M test_source1/fs1/subvol")
with mocktime("20101111000000"):
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --clear-mountpoint".split(" ")).run())
r=shelltest("zfs mount |grep -o /test_target1.*")
self.assertMultiLineEqual(r,"""
/test_target1
""")
def test_transfer_thinning(self):
# test pre/post/during transfer thinning and efficient transfer (no transferring of stuff that gets deleted on target)
#less output
shelltest("zfs set autobackup:test2=true test_source1/fs1/sub")
# nobody wants this one, will be destroyed before transferring (over a year ago)
with mocktime("20000101000000"):
self.assertFalse(ZfsAutobackup("test2 --allow-empty".split(" ")).run())
# only target wants this one (monthlies)
with mocktime("20010101000000"):
self.assertFalse(ZfsAutobackup("test2 --allow-empty".split(" ")).run())
# both want this one (daily + monthly)
# other snapshots shouldn't influence the middle one that we actually want.
with mocktime("20010201000000"):
shelltest("zfs snapshot test_source1/fs1/sub@other1")
self.assertFalse(ZfsAutobackup("test2 --allow-empty".split(" ")).run())
shelltest("zfs snapshot test_source1/fs1/sub@other2")
# only source wants this one (daily)
with mocktime("20010202000000"):
self.assertFalse(ZfsAutobackup("test2 --allow-empty".split(" ")).run())
#will become common snapshot
with OutputIO() as buf:
with redirect_stdout(buf):
with mocktime("20010203000000"):
self.assertFalse(ZfsAutobackup("--keep-source=1d10d --keep-target=1m10m --allow-empty --verbose --clear-mountpoint --other-snapshots test2 test_target1".split(" ")).run())
print(buf.getvalue())
self.assertIn(
"""
[Source] test_source1/fs1/sub@test2-20000101000000: Destroying
[Source] test_source1/fs1/sub@test2-20010101000000: -> test_target1/test_source1/fs1/sub (new)
[Source] test_source1/fs1/sub@other1: -> test_target1/test_source1/fs1/sub
[Source] test_source1/fs1/sub@test2-20010101000000: Destroying
[Source] test_source1/fs1/sub@test2-20010201000000: -> test_target1/test_source1/fs1/sub
[Source] test_source1/fs1/sub@other2: -> test_target1/test_source1/fs1/sub
[Source] test_source1/fs1/sub@test2-20010203000000: -> test_target1/test_source1/fs1/sub
""", buf.getvalue())
r=shelltest("zfs list -H -o name -r -t snapshot test_source1 test_target1")
self.assertMultiLineEqual(r,"""
test_source1/fs1/sub@other1
test_source1/fs1/sub@test2-20010201000000
test_source1/fs1/sub@other2
test_source1/fs1/sub@test2-20010202000000
test_source1/fs1/sub@test2-20010203000000
test_target1/test_source1/fs1/sub@test2-20010101000000
test_target1/test_source1/fs1/sub@other1
test_target1/test_source1/fs1/sub@test2-20010201000000
test_target1/test_source1/fs1/sub@other2
test_target1/test_source1/fs1/sub@test2-20010203000000
""")

View File

@ -1,3 +1,5 @@
from os.path import exists
from basetest import *
from zfs_autobackup.BlockHasher import BlockHasher
@ -9,6 +11,10 @@ class TestZfsCheck(unittest2.TestCase):
def test_volume(self):
if exists("/.dockerenv"):
self.skipTest("FIXME: zfscheck volumes not supported in docker yet")
prepare_zpools()
shelltest("zfs create -V200M test_source1/vol")
@ -50,7 +56,7 @@ class TestZfsCheck(unittest2.TestCase):
shelltest("mkfifo /test_source1/f")
shelltest("zfs snapshot test_source1@test")
ZfsCheck("test_source1@test --debug".split(" "), print_arguments=False).run()
with self.subTest("Generate"):
with OutputIO() as buf:
with redirect_stdout(buf):
@ -178,15 +184,16 @@ whole_whole2_partial 0 309ffffba2e1977d12f3b7469971f30d28b94bd8
shelltest("cp tests/data/whole /test_source1/testfile")
shelltest("zfs snapshot test_source1@test")
#breaks pipe when grep exits:
#breaks pipe when head exits
#important to use --debug, since that generates extra output which would be problematic if we didn't do correct SIGPIPE handling
shelltest("python -m zfs_autobackup.ZfsCheck test_source1@test --debug | grep -m1 'Hashing tree'")
# time.sleep(5)
shelltest("python -m zfs_autobackup.ZfsCheck test_source1@test --debug | head -n1")
#should NOT be mounted anymore if cleanup went ok:
self.assertNotRegex(shelltest("mount"), "test_source1@test")
def test_brokenpipe_cleanup_volume(self):
if exists("/.dockerenv"):
self.skipTest("FIXME: zfscheck volumes not supported in docker yet")
prepare_zpools()
shelltest("zfs create -V200M test_source1/vol")
@ -194,7 +201,7 @@ whole_whole2_partial 0 309ffffba2e1977d12f3b7469971f30d28b94bd8
#breaks pipe when grep exits:
#important to use --debug, since that generates extra output which would be problematic if we didn't do correct SIGPIPE handling
shelltest("python -m zfs_autobackup.ZfsCheck test_source1/vol@test --debug | grep -m1 'Hashing file'")
shelltest("python -m zfs_autobackup.ZfsCheck test_source1/vol@test --debug| grep -m1 'Hashing file'")
# time.sleep(1)
r = shelltest("zfs list -H -o name -r -t all " + TEST_POOLS)

View File

@ -15,7 +15,9 @@ class TestZfsNode(unittest2.TestCase):
node = ZfsNode(utc=False, snapshot_time_format="test-%Y%m%d%H%M%S", hold_name="zfs_autobackup:test", logger=logger, description=description)
with self.subTest("first snapshot"):
node.consistent_snapshot(node.selected_datasets(property_name="autobackup:test",exclude_paths=[], exclude_received=False, exclude_unchanged=False, min_change=200000), "test-20101111000001", 100000)
(selected_datasets, excluded_datasets)=node.selected_datasets(property_name="autobackup:test", exclude_paths=[], exclude_received=False,
exclude_unchanged=0)
node.consistent_snapshot(selected_datasets, "test-20101111000001", 100000)
r = shelltest("zfs list -H -o name -r -t all " + TEST_POOLS)
self.assertEqual(r, """
test_source1
@ -33,7 +35,9 @@ test_target1
""")
with self.subTest("second snapshot, no changes, no snapshot"):
node.consistent_snapshot(node.selected_datasets(property_name="autobackup:test",exclude_paths=[], exclude_received=False, exclude_unchanged=False, min_change=200000), "test-20101111000002", 1)
(selected_datasets, excluded_datasets)=node.selected_datasets(property_name="autobackup:test", exclude_paths=[], exclude_received=False,
exclude_unchanged=0)
node.consistent_snapshot(selected_datasets, "test-20101111000002", 1)
r = shelltest("zfs list -H -o name -r -t all " + TEST_POOLS)
self.assertEqual(r, """
test_source1
@ -51,7 +55,8 @@ test_target1
""")
with self.subTest("second snapshot, no changes, empty snapshot"):
node.consistent_snapshot(node.selected_datasets(property_name="autobackup:test", exclude_paths=[], exclude_received=False, exclude_unchanged=False, min_change=200000), "test-20101111000002", 0)
(selected_datasets, excluded_datasets) =node.selected_datasets(property_name="autobackup:test", exclude_paths=[], exclude_received=False, exclude_unchanged=0)
node.consistent_snapshot(selected_datasets, "test-20101111000002", 0)
r = shelltest("zfs list -H -o name -r -t all " + TEST_POOLS)
self.assertEqual(r, """
test_source1
@ -79,7 +84,8 @@ test_target1
with self.subTest("Test if all cmds are executed correctly (no failures)"):
with OutputIO() as buf:
with redirect_stdout(buf):
node.consistent_snapshot(node.selected_datasets(property_name="autobackup:test", exclude_paths=[], exclude_received=False, exclude_unchanged=False, min_change=1), "test-1",
(selected_datasets, excluded_datasets) =node.selected_datasets(property_name="autobackup:test", exclude_paths=[], exclude_received=False, exclude_unchanged=0)
node.consistent_snapshot(selected_datasets, "test-1",
0,
pre_snapshot_cmds=["echo pre1", "echo pre2"],
post_snapshot_cmds=["echo post1 >&2", "echo post2 >&2"]
@ -95,7 +101,8 @@ test_target1
with OutputIO() as buf:
with redirect_stdout(buf):
with self.assertRaises(ExecuteError):
node.consistent_snapshot(node.selected_datasets(property_name="autobackup:test", exclude_paths=[], exclude_received=False, exclude_unchanged=False, min_change=1), "test-1",
(selected_datasets, excluded_datasets) =node.selected_datasets(property_name="autobackup:test", exclude_paths=[], exclude_received=False, exclude_unchanged=0)
node.consistent_snapshot(selected_datasets, "test-1",
0,
pre_snapshot_cmds=["echo pre1", "false", "echo pre2"],
post_snapshot_cmds=["echo post1", "false", "echo post2"]
@ -112,7 +119,8 @@ test_target1
with redirect_stdout(buf):
with self.assertRaises(ExecuteError):
#same snapshot name as before so it fails
node.consistent_snapshot(node.selected_datasets(property_name="autobackup:test", exclude_paths=[], exclude_received=False, exclude_unchanged=False, min_change=1), "test-1",
(selected_datasets, excluded_datasets) =node.selected_datasets(property_name="autobackup:test", exclude_paths=[], exclude_received=False, exclude_unchanged=0)
node.consistent_snapshot(selected_datasets, "test-1",
0,
pre_snapshot_cmds=["echo pre1", "echo pre2"],
post_snapshot_cmds=["echo post1", "echo post2"]
@ -158,7 +166,9 @@ test_target1
logger = LogStub()
description = "[Source]"
node = ZfsNode(utc=False, snapshot_time_format="test-%Y%m%d%H%M%S", hold_name="zfs_autobackup:test", logger=logger, description=description)
s = pformat(node.selected_datasets(property_name="autobackup:test", exclude_paths=[], exclude_received=False, exclude_unchanged=True, min_change=1))
(selected_datasets, excluded_datasets)=node.selected_datasets(property_name="autobackup:test", exclude_paths=[], exclude_received=False,
exclude_unchanged=1)
s = pformat(selected_datasets)
print(s)
# basics

tests/tests_docker Executable file (42 lines)
View File

@ -0,0 +1,42 @@
#!/bin/sh
#NOTE: This script will be started inside the test docker container
set -e
if ! [ -e /.dockerenv ]; then
echo "only run this script inside a docker container!"
exit 1
fi
if ! [ -e /dev/ram0 ]; then
echo "Please load this module outside container:" >&2
echo "sudo modprobe brd rd_size=512000" >&2
exit 1
fi
#start sshd and other stuff
ssh-keygen -A
/usr/sbin/sshd
udevd -d
#config ssh
if ! [ -e /root/.ssh/id_rsa ]; then
ssh-keygen -t rsa -f /root/.ssh/id_rsa -P ''
fi
cat >> ~/.ssh/config <<EOF
Host *
addkeystoagent yes
controlpath ~/.ssh/control-master-%r@%h:%p
controlmaster auto
controlpersist 3600
EOF
cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
ssh -oStrictHostKeyChecking=no localhost 'echo SSH OK'
cd /app
python -m unittest discover /app/tests -vvvvf $@

View File

@ -10,7 +10,7 @@ class CliBase(object):
Overridden in subclasses that add stuff for the specific programs."""
# also used by setup.py
VERSION = "3.2-beta1"
VERSION = "3.3-beta.2"
HEADER = "{} v{} - (c)2022 E.H.Eefting (edwin@datux.nl)".format(os.path.basename(sys.argv[0]), VERSION)
def __init__(self, argv, print_arguments=True):

View File

@ -36,7 +36,7 @@ class LogConsole:
def warning(self, txt):
self.clear_progress()
if self.colorama:
print(colorama.Fore.YELLOW + colorama.Style.BRIGHT + " NOTE: " + txt + colorama.Style.RESET_ALL)
print(colorama.Fore.YELLOW + colorama.Style.NORMAL + " NOTE: " + txt + colorama.Style.RESET_ALL)
else:
print(" NOTE: " + txt)
sys.stdout.flush()

View File

@ -1,4 +1,3 @@
import time
from .ThinnerRule import ThinnerRule
@ -37,7 +36,7 @@ class Thinner:
return ret
def thin(self, objects, keep_objects=None, now=None):
def thin(self, objects, keep_objects, now):
"""thin list of objects with current schedule rules. objects: list of
objects to thin. every object should have timestamp attribute.
@ -49,8 +48,6 @@ class Thinner:
now: if specified, use this time as current time
"""
if not keep_objects:
keep_objects = []
# always keep a number of the last objets?
if self.always_keep:
@ -68,9 +65,6 @@ class Thinner:
for rule in self.rules:
time_blocks[rule.period] = {}
if not now:
now = int(time.time())
keeps = []
removes = []
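Since keep_objects and now are now mandatory, every caller has to be explicit, as the TestThinner changes above show. A minimal usage sketch, assuming the tool's default schedule string and a Thing-like object with a timestamp attribute in epoch seconds (as in the tests):

    import time
    from zfs_autobackup.Thinner import Thinner

    class Thing:
        def __init__(self, timestamp):
            self.timestamp = timestamp  # epoch seconds, the only attribute thin() needs

    now = int(time.time())
    # one object every 12 hours over the last 30 days
    things = [Thing(now - age) for age in range(0, 30 * 24 * 3600, 12 * 3600)]

    thinner = Thinner("10,1d1w,1w1m,1m1y")  # always keep 10, then daily/weekly/monthly rules
    keeps, removes = thinner.thin(things, keep_objects=[], now=now)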

View File

@ -2,6 +2,7 @@ import argparse
import sys
from .CliBase import CliBase
from .util import datetime_now
class ZfsAuto(CliBase):
@ -46,8 +47,8 @@ class ZfsAuto(CliBase):
self.verbose("NOTE: Source and target are on the same host, excluding target-path from selection.")
self.exclude_paths.append(args.target_path)
else:
if not args.exclude_received:
self.verbose("NOTE: Source and target are on the same host, adding --exclude-received to commandline.")
if not args.exclude_received and not args.include_received:
self.verbose("NOTE: Source and target are on the same host, adding --exclude-received to commandline. (use --include-received to overrule)")
args.exclude_received = True
if args.test:
@ -58,7 +59,11 @@ class ZfsAuto(CliBase):
self.snapshot_time_format = args.snapshot_format.format(args.backup_name)
self.hold_name = args.hold_format.format(args.backup_name)
dt = datetime_now(args.utc)
self.verbose("")
self.verbose("Current time {} : {}".format(args.utc and "UTC" or " ", dt.strftime("%Y-%m-%d %H:%M:%S")))
self.verbose("Selecting dataset property : {}".format(self.property_name))
self.verbose("Snapshot format : {}".format(self.snapshot_time_format))
self.verbose("Timezone : {}".format("UTC" if args.utc else "Local"))
@ -98,11 +103,14 @@ class ZfsAuto(CliBase):
group=parser.add_argument_group("Selection options")
group.add_argument('--ignore-replicated', action='store_true', help=argparse.SUPPRESS)
group.add_argument('--exclude-unchanged', action='store_true',
help='Exclude datasets that have no changes since any last snapshot. (Useful in combination with proxmox HA replication)')
group.add_argument('--exclude-unchanged', metavar='BYTES', default=0, type=int,
help='Exclude datasets that have less than BYTES data changed since any last snapshot. (Use with proxmox HA replication)')
group.add_argument('--exclude-received', action='store_true',
help='Exclude datasets that have the origin of their autobackup: property as "received". '
'This can avoid recursive replication between two backup partners.')
group.add_argument('--include-received', action='store_true',
help=argparse.SUPPRESS)
return parser
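The practical effect: --exclude-unchanged used to be an on/off switch, now it takes a byte threshold. A sketch of the new call, mirroring test_exclude_unchanged above (it needs real pools to run against; 1 byte means "skip datasets with no changes at all"):

    from zfs_autobackup.ZfsAutobackup import ZfsAutobackup

    # skip any dataset with less than 1 byte written since its last snapshot
    ZfsAutobackup("test test_target1 --verbose --exclude-unchanged=1".split(" ")).run()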

View File

@ -1,9 +1,7 @@
import time
import argparse
from datetime import datetime
from signal import signal, SIGPIPE
from .util import output_redir, sigpipe_handler
from .util import output_redir, sigpipe_handler, datetime_now
from .ZfsAuto import ZfsAuto
@ -33,15 +31,15 @@ class ZfsAutobackup(ZfsAuto):
if args.allow_empty:
args.min_change = 0
if args.destroy_incompatible:
args.rollback = True
# if args.destroy_incompatible:
# args.rollback = True
if args.resume:
self.warning("The --resume option isn't needed anymore (its autodetected now)")
self.warning("The --resume option isn't needed anymore (it's autodetected now)")
if args.raw:
self.warning(
"The --raw option isn't needed anymore (its autodetected now). Also see --encrypt and --decrypt.")
"The --raw option isn't needed anymore (it's autodetected now). Also see --encrypt and --decrypt.")
if args.compress and args.ssh_source is None and args.ssh_target is None:
self.warning("Using compression, but transfer is local.")
@ -67,20 +65,22 @@ class ZfsAutobackup(ZfsAuto):
help='Only create snapshot if enough bytes are changed. (default %('
'default)s)')
group.add_argument('--allow-empty', action='store_true',
help='If nothing has changed, still create empty snapshots. (Faster. Same as --min-change=0)')
help='If nothing has changed, still create empty snapshots. (Same as --min-change=0)')
group.add_argument('--other-snapshots', action='store_true',
help='Send over other snapshots as well, not just the ones created by this tool.')
group.add_argument('--set-snapshot-properties', metavar='PROPERTY=VALUE,...', type=str,
help='List of properties to set on the snapshot.')
group.add_argument('--no-guid-check', action='store_true',
help='Dont check guid of common snapshots. (faster)')
group = parser.add_argument_group("Transfer options")
group.add_argument('--no-send', action='store_true',
help='Don\'t transfer snapshots (useful for cleanups, or if you want a serperate send-cronjob)')
help='Don\'t transfer snapshots (useful for cleanups, or if you want a separate send-cronjob)')
group.add_argument('--no-holds', action='store_true',
help='Don\'t hold snapshots. (Faster. Allows you to destroy common snapshot.)')
group.add_argument('--clear-refreservation', action='store_true',
help='Filter "refreservation" property. (recommended, safes space. same as '
help='Filter "refreservation" property. (recommended, saves space. same as '
'--filter-properties refreservation)')
group.add_argument('--clear-mountpoint', action='store_true',
help='Set property canmount=noauto for new datasets. (recommended, prevents mount '
@ -95,9 +95,9 @@ class ZfsAutobackup(ZfsAuto):
help='Rollback changes to the latest target snapshot before starting. (normally you can '
'prevent changes by setting the readonly property on the target_path to on)')
group.add_argument('--force', '-F', action='store_true',
help='Use zfs -F option to force overwrite/rollback. (Usefull with --strip-path=1, but use with care)')
help='Use zfs -F option to force overwrite/rollback. (Useful with --strip-path=1, but use with care)')
group.add_argument('--destroy-incompatible', action='store_true',
help='Destroy incompatible snapshots on target. Use with care! (implies --rollback)')
help='Destroy incompatible snapshots on target. Use with care! (also does rollback of dataset)')
group.add_argument('--ignore-transfer-errors', action='store_true',
help='Ignore transfer errors (still checks if received filesystem exists. useful for '
'acltype errors)')
@ -110,15 +110,17 @@ class ZfsAutobackup(ZfsAuto):
group.add_argument('--zfs-compressed', action='store_true',
help='Transfer blocks that already have zfs-compression as-is.')
group = parser.add_argument_group("ZFS send/recv pipes")
group = parser.add_argument_group("Data transfer options")
group.add_argument('--compress', metavar='TYPE', default=None, nargs='?', const='zstd-fast',
choices=compressors.choices(),
help='Use compression during transfer, defaults to zstd-fast if TYPE is not specified. ({})'.format(
", ".join(compressors.choices())))
group.add_argument('--rate', metavar='DATARATE', default=None,
help='Limit data transfer rate (e.g. 128K. requires mbuffer.)')
help='Limit data transfer rate in Bytes/sec (e.g. 128K. requires mbuffer.)')
group.add_argument('--buffer', metavar='SIZE', default=None,
help='Add zfs send and recv buffers to smooth out IO bursts. (e.g. 128M. requires mbuffer)')
parser.add_argument('--buffer-chunk-size', metavar="BUFFERCHUNKSIZE", default=None,
help='Tune chunk size when mbuffer is used. (requires mbuffer.)')
group.add_argument('--send-pipe', metavar="COMMAND", default=[], action='append',
help='pipe zfs send output through COMMAND (can be used multiple times)')
group.add_argument('--recv-pipe', metavar="COMMAND", default=[], action='append',
@ -142,7 +144,10 @@ class ZfsAutobackup(ZfsAuto):
# NOTE: this method also uses self.args. args that need extra processing are passed as function parameters:
def thin_missing_targets(self, target_dataset, used_target_datasets):
"""thin target datasets that are missing on the source."""
"""thin target datasets that are missing on the source.
:type used_target_datasets: list[ZfsDataset]
:type target_dataset: ZfsDataset
"""
self.debug("Thinning obsolete datasets")
missing_datasets = [dataset for dataset in target_dataset.recursive_datasets if
@ -150,6 +155,7 @@ class ZfsAutobackup(ZfsAuto):
count = 0
for dataset in missing_datasets:
self.debug("analyse missing {}".format(dataset))
count = count + 1
if self.args.progress:
@ -167,7 +173,11 @@ class ZfsAutobackup(ZfsAuto):
# NOTE: this method also uses self.args. args that need extra processing are passed as function parameters:
def destroy_missing_targets(self, target_dataset, used_target_datasets):
"""destroy target datasets that are missing on the source and that meet the requirements"""
"""destroy target datasets that are missing on the source and that meet the requirements
:type used_target_datasets: list[ZfsDataset]
:type target_dataset: ZfsDataset
"""
self.debug("Destroying obsolete datasets")
@ -189,11 +199,11 @@ class ZfsAutobackup(ZfsAuto):
dataset.debug("Destroy missing: ignoring")
else:
dataset.verbose(
"Destroy missing: has no snapshots made by us. (please destroy manually)")
"Destroy missing: has no snapshots made by us (please destroy manually).")
else:
# past the deadline?
deadline_ttl = ThinnerRule("0s" + self.args.destroy_missing).ttl
now = int(time.time())
now = datetime_now(self.args.utc).timestamp()
if dataset.our_snapshots[-1].timestamp + deadline_ttl > now:
dataset.verbose("Destroy missing: Waiting for deadline.")
else:
@ -234,11 +244,22 @@ class ZfsAutobackup(ZfsAuto):
"""determine the zfs send pipe"""
ret = []
_mbuffer = False
_buffer = "16M"
_cs = "128k"
_rate = False
# IO buffer
if self.args.buffer:
logger("zfs send buffer : {}".format(self.args.buffer))
ret.extend([ExecuteNode.PIPE, "mbuffer", "-q", "-s128k", "-m" + self.args.buffer])
_mbuffer = True
_buffer = self.args.buffer
# IO chunk size
if self.args.buffer_chunk_size:
logger("zfs send chunk size : {}".format(self.args.buffer_chunk_size))
_mbuffer = True
_cs = self.args.buffer_chunk_size
# custom pipes
for send_pipe in self.args.send_pipe:
@ -256,7 +277,14 @@ class ZfsAutobackup(ZfsAuto):
# transfer rate
if self.args.rate:
logger("zfs send transfer rate : {}".format(self.args.rate))
ret.extend([ExecuteNode.PIPE, "mbuffer", "-q", "-s128k", "-m16M", "-R" + self.args.rate])
_mbuffer = True
_rate = self.args.rate
if _mbuffer:
cmd = [ExecuteNode.PIPE, "mbuffer", "-q", "-s{}".format(_cs), "-m{}".format(_buffer)]
if _rate:
cmd.append("-R{}".format(self.args.rate))
ret.extend(cmd)
return ret
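To make the refactor concrete, here is a standalone sketch of the mbuffer command assembly above, using the same defaults (128k chunk size, 16M buffer) and flag names as this diff; mbuffer_cmd itself is illustrative, not part of the code:

    from zfs_autobackup.ExecuteNode import ExecuteNode

    def mbuffer_cmd(buffer_size=None, chunk_size=None, rate=None):
        # returns [] when no option asks for mbuffer at all
        if not (buffer_size or chunk_size or rate):
            return []
        cmd = [ExecuteNode.PIPE, "mbuffer", "-q",
               "-s{}".format(chunk_size or "128k"),   # chunk size (-s)
               "-m{}".format(buffer_size or "16M")]   # buffer memory (-m)
        if rate:
            cmd.append("-R{}".format(rate))           # rate limit in bytes/sec (-R)
        return cmd

    # e.g. --buffer=64M --rate=1M -> one "mbuffer -q -s128k -m64M -R1M" stage
    assert mbuffer_cmd(buffer_size="64M", rate="1M")[1:] == \
        ["mbuffer", "-q", "-s128k", "-m64M", "-R1M"]

So combining --buffer and --rate now yields a single mbuffer stage instead of the two separate invocations the old code produced. The recv side below mirrors this, but only adds its second buffer when the transfer is non-local or other pipes are active.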
@ -278,11 +306,19 @@ class ZfsAutobackup(ZfsAuto):
logger("zfs recv custom pipe : {}".format(recv_pipe))
# IO buffer
if self.args.buffer:
if self.args.buffer or self.args.buffer_chunk_size:
_cs = "128k"
_buffer = "16M"
# only add second buffer if it's useful (e.g. non-local transfer or other pipes active)
if self.args.ssh_source != None or self.args.ssh_target != None or self.args.recv_pipe or self.args.send_pipe or self.args.compress != None:
logger("zfs recv buffer : {}".format(self.args.buffer))
ret.extend(["mbuffer", "-q", "-s128k", "-m" + self.args.buffer, ExecuteNode.PIPE])
if self.args.buffer_chunk_size:
_cs = self.args.buffer_chunk_size
if self.args.buffer:
_buffer = self.args.buffer
ret.extend(["mbuffer", "-q", "-s{}".format(_cs), "-m{}".format(_buffer), ExecuteNode.PIPE])
return ret
@ -342,6 +378,7 @@ class ZfsAutobackup(ZfsAuto):
and target_dataset.parent \
and target_dataset.parent not in target_datasets \
and not target_dataset.parent.exists:
target_dataset.debug("Creating unmountable parents")
target_dataset.parent.create_filesystem(parents=True)
# determine common zpool features (cached, so no problem we call it often)
@ -360,10 +397,8 @@ class ZfsAutobackup(ZfsAuto):
destroy_incompatible=self.args.destroy_incompatible,
send_pipes=send_pipes, recv_pipes=recv_pipes,
decrypt=self.args.decrypt, encrypt=self.args.encrypt,
zfs_compressed=self.args.zfs_compressed, force=self.args.force)
zfs_compressed=self.args.zfs_compressed, force=self.args.force, guid_check=not self.args.no_guid_check)
except Exception as e:
# if self.args.progress:
# self.clear_progress()
fail_count = fail_count + 1
source_dataset.error("FAILED: " + str(e))
@ -371,8 +406,6 @@ class ZfsAutobackup(ZfsAuto):
self.verbose("Debug mode, aborting on first error")
raise
# if self.args.progress:
# self.clear_progress()
target_path_dataset = target_node.get_dataset(self.args.target_path)
if not self.args.no_thinning:
@ -443,20 +476,18 @@ class ZfsAutobackup(ZfsAuto):
################# select source datasets
self.set_title("Selecting")
source_datasets = source_node.selected_datasets(property_name=self.property_name,
( source_datasets, excluded_datasets) = source_node.selected_datasets(property_name=self.property_name,
exclude_received=self.args.exclude_received,
exclude_paths=self.exclude_paths,
exclude_unchanged=self.args.exclude_unchanged,
min_change=self.args.min_change)
if not source_datasets:
exclude_unchanged=self.args.exclude_unchanged)
if not source_datasets and not excluded_datasets:
self.print_error_sources()
return 255
################# snapshotting
if not self.args.no_snapshot:
self.set_title("Snapshotting")
dt = datetime.utcnow() if self.args.utc else datetime.now()
snapshot_name = dt.strftime(self.snapshot_time_format)
snapshot_name = datetime_now(self.args.utc).strftime(self.snapshot_time_format)
source_node.consistent_snapshot(source_datasets, snapshot_name,
min_changed_bytes=self.args.min_change,
pre_snapshot_cmds=self.args.pre_snapshot_cmd,

View File

@ -86,8 +86,8 @@ def verify_filesystem(source_snapshot, source_mnt, target_snapshot, target_mnt,
raise(Exception("program errror, unknown method"))
finally:
source_snapshot.unmount()
target_snapshot.unmount()
source_snapshot.unmount(source_mnt)
target_snapshot.unmount(target_mnt)
# def hash_dev(node, dev):
@ -186,7 +186,7 @@ class ZfsAutoverify(ZfsAuto):
target_dataset = target_node.get_dataset(target_name)
# find common snapshots to verify
source_snapshot = source_dataset.find_common_snapshot(target_dataset)
source_snapshot = source_dataset.find_common_snapshot(target_dataset, True)
target_snapshot = target_dataset.find_snapshot(source_snapshot)
if source_snapshot is None or target_snapshot is None:
@ -239,12 +239,11 @@ class ZfsAutoverify(ZfsAuto):
################# select source datasets
self.set_title("Selecting")
source_datasets = source_node.selected_datasets(property_name=self.property_name,
( source_datasets, excluded_datasets) = source_node.selected_datasets(property_name=self.property_name,
exclude_received=self.args.exclude_received,
exclude_paths=self.exclude_paths,
exclude_unchanged=self.args.exclude_unchanged,
min_change=0)
if not source_datasets:
exclude_unchanged=self.args.exclude_unchanged)
if not source_datasets and not excluded_datasets:
self.print_error_sources()
return 255
@ -307,6 +306,7 @@ class ZfsAutoverify(ZfsAuto):
def cli():
import sys
raise(Exception("This program is incomplete, dont use it yet."))
signal(SIGPIPE, sigpipe_handler)
failed = ZfsAutoverify(sys.argv[1:], False).run()
sys.exit(min(failed,255))

View File

@ -74,7 +74,7 @@ class ZfsCheck(CliBase):
def cleanup_zfs_filesystem(self, snapshot):
mnt = "/tmp/" + tmp_name()
snapshot.unmount()
snapshot.unmount(mnt)
self.debug("Cleaning up temporary mount point")
self.node.run(["rmdir", mnt], hide_errors=True, valid_exitcodes=[])
@ -106,7 +106,7 @@ class ZfsCheck(CliBase):
time.sleep(1)
raise (Exception("Timeout while waiting for /dev entry to appear. (looking in: {})".format(locations)))
raise (Exception("Timeout while waiting for /dev entry to appear. (looking in: {}). Hint: did you forget to load the encryption key?".format(locations)))
def cleanup_zfs_volume(self, snapshot):
"""destroys temporary volume snapshot"""

View File

@ -58,6 +58,13 @@ class ZfsDataset:
"""
self.zfs_node.error("{}: {}".format(self.name, txt))
def warning(self, txt):
"""
Args:
:type txt: str
"""
self.zfs_node.warning("{}: {}".format(self.name, txt))
def debug(self, txt):
"""
Args:
@ -81,8 +88,8 @@ class ZfsDataset:
Args:
:type count: int
"""
components=self.split_path()
if count>len(components):
components = self.split_path()
if count > len(components):
raise Exception("Trying to strip too much from path ({} items from {})".format(count, self.name))
return "/".join(components[count:])
@ -118,22 +125,25 @@ class ZfsDataset:
"""true if this dataset is a snapshot"""
return self.name.find("@") != -1
def is_selected(self, value, source, inherited, exclude_received, exclude_paths, exclude_unchanged, min_change):
def is_selected(self, value, source, inherited, exclude_received, exclude_paths, exclude_unchanged):
"""determine if dataset should be selected for backup (called from
ZfsNode)
Args:
:type exclude_paths: list of str
:type exclude_paths: list[str]
:type value: str
:type source: str
:type inherited: bool
:type exclude_received: bool
:type exclude_unchanged: bool
:type min_change: bool
:type exclude_unchanged: int
:param value: Value of the zfs property ("false"/"true"/"child"/parent/"-")
:param source: Source of the zfs property ("local"/"received", "-")
:param inherited: True of the value/source was inherited from a higher dataset.
Returns: True : Selected
False: Excluded
None: No property found
"""
# sanity checks
@ -149,7 +159,7 @@ class ZfsDataset:
# non specified, ignore
if value == "-":
return False
return None
# only select childs of this dataset, ignore
if value == "child" and not inherited:
@ -179,15 +189,14 @@ class ZfsDataset:
self.verbose("Excluded (dataset already received)")
return False
if exclude_unchanged and not self.is_changed(min_change):
self.verbose("Excluded (unchanged since last snapshot)")
if not self.is_changed(exclude_unchanged):
self.verbose("Excluded (by --exclude-unchanged)")
return False
self.verbose("Selected")
return True
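The return type of is_selected() thus gains a third state: True (selected), False (explicitly excluded) and None (no autobackup property at all). A hypothetical sketch of how a caller such as ZfsNode.selected_datasets() can now tell "excluded" apart from "not participating" (split_selection and its input are illustrative, not the actual implementation):

    def split_selection(candidates):
        # candidates: iterable of (dataset, verdict) pairs, verdict from is_selected()
        selected, excluded = [], []
        for dataset, verdict in candidates:
            if verdict is True:
                selected.append(dataset)
            elif verdict is False:
                excluded.append(dataset)
            # verdict is None: property was "-", dataset doesn't participate at all
        return selected, excluded

    print(split_selection([("fs1", True), ("fs2", False), ("fs3", None)]))
    # -> (['fs1'], ['fs2'])

This is what lets the main program distinguish "nothing matched the property" (exit 255) from "everything matched but was excluded" (no error), as in the (source_datasets, excluded_datasets) changes elsewhere in this diff.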
@CachedProperty
@property
def parent(self):
"""get zfs-parent of this dataset. for snapshots this means it will get
the filesystem/volume that it belongs to. otherwise it will return the
@ -196,11 +205,12 @@ class ZfsDataset:
we cache this so everything in the parent that is cached also stays.
returns None if there is no parent.
:rtype: ZfsDataset | None
"""
if self.is_snapshot:
return self.zfs_node.get_dataset(self.filesystem_name)
else:
stripped=self.rstrip_path(1)
stripped = self.rstrip_path(1)
if stripped:
return self.zfs_node.get_dataset(stripped)
else:
@ -247,32 +257,46 @@ class ZfsDataset:
return None
@CachedProperty
def exists_check(self):
"""check on disk if it exists"""
self.debug("Checking if dataset exists")
return (len(self.zfs_node.run(tab_split=True, cmd=["zfs", "list", self.name], readonly=True,
valid_exitcodes=[0, 1],
hide_errors=True)) > 0)
@property
def exists(self):
"""check if dataset exists. Use force to force a specific value to be
cached, if you already know. Useful for performance reasons
"""returns True if dataset should exist.
Use force_exists to force a specific value, if you already know. Useful for performance and test reasons
"""
if self.force_exists is not None:
self.debug("Checking if filesystem exists: was forced to {}".format(self.force_exists))
if self.force_exists:
self.debug("Dataset should exist")
else:
self.debug("Dataset should not exist")
return self.force_exists
else:
self.debug("Checking if filesystem exists")
return self.exists_check
return (self.zfs_node.run(tab_split=True, cmd=["zfs", "list", self.name], readonly=True, valid_exitcodes=[0, 1],
hide_errors=True) and True)
def create_filesystem(self, parents=False):
def create_filesystem(self, parents=False, unmountable=True):
"""create a filesystem
Args:
:type parents: bool
"""
if parents:
self.verbose("Creating filesystem and parents")
self.zfs_node.run(["zfs", "create", "-p", self.name])
else:
self.verbose("Creating filesystem")
self.zfs_node.run(["zfs", "create", self.name])
# recurse up
if parents and self.parent and not self.parent.exists:
self.parent.create_filesystem(parents, unmountable)
cmd = ["zfs", "create"]
if unmountable:
cmd.extend(["-o", "canmount=off"])
cmd.append(self.name)
self.zfs_node.run(cmd)
self.force_exists = True
@ -315,9 +339,6 @@ class ZfsDataset:
"zfs", "get", "-H", "-o", "property,value", "-p", "all", self.name
]
if not self.exists:
return {}
self.debug("Getting zfs properties")
ret = {}
@ -338,7 +359,6 @@ class ZfsDataset:
if min_changed_bytes == 0:
return True
if int(self.properties['written']) < min_changed_bytes:
return False
else:
@ -355,7 +375,7 @@ class ZfsDataset:
@property
def holds(self):
"""get list of holds for dataset"""
"""get list[holds] for dataset"""
output = self.zfs_node.run(["zfs", "holds", "-H", self.name], valid_exitcodes=[0], tab_split=True,
readonly=True)
@ -398,15 +418,15 @@ class ZfsDataset:
seconds = time.mktime(dt.timetuple())
return seconds
def from_names(self, names):
"""convert a list of names to a list ZfsDatasets for this zfs_node
def from_names(self, names, force_exists=None):
"""convert a list[names] to a list ZfsDatasets for this zfs_node
Args:
:type names: list of str
:type names: list[str]
"""
ret = []
for name in names:
ret.append(self.zfs_node.get_dataset(name))
ret.append(self.zfs_node.get_dataset(name, force_exists))
return ret
@ -425,8 +445,11 @@ class ZfsDataset:
@CachedProperty
def snapshots(self):
"""get all snapshots of this dataset"""
"""get all snapshots of this dataset
:rtype: ZfsDataset
"""
#FIXME: don't check for existence. (currently needed for _add_virtual_snapshots)
if not self.exists:
return []
@ -436,11 +459,11 @@ class ZfsDataset:
"zfs", "list", "-d", "1", "-r", "-t", "snapshot", "-H", "-o", "name", self.name
]
return self.from_names(self.zfs_node.run(cmd=cmd, readonly=True))
return self.from_names(self.zfs_node.run(cmd=cmd, readonly=True), force_exists=True)
@property
def our_snapshots(self):
"""get list of snapshots creates by us of this dataset"""
"""get list[snapshots] creates by us of this dataset"""
ret = []
for snapshot in self.snapshots:
if snapshot.is_ours():
@ -535,7 +558,7 @@ class ZfsDataset:
"zfs", "list", "-r", "-t", types, "-o", "name", "-H", self.name
])
return self.from_names(names[1:])
return self.from_names(names[1:], force_exists=True)
@CachedProperty
def datasets(self, types="filesystem,volume"):
@ -551,9 +574,10 @@ class ZfsDataset:
"zfs", "list", "-r", "-t", types, "-o", "name", "-H", "-d", "1", self.name
])
return self.from_names(names[1:])
return self.from_names(names[1:], force_exists=True)
def send_pipe(self, features, prev_snapshot, resume_token, show_progress, raw, send_properties, write_embedded, send_pipes, zfs_compressed):
def send_pipe(self, features, prev_snapshot, resume_token, show_progress, raw, send_properties, write_embedded,
send_pipes, zfs_compressed):
"""returns a pipe with zfs send output for this snapshot
resume_token: resume sending from this token. (in that case we don't
@ -561,8 +585,8 @@ class ZfsDataset:
Args:
:param send_pipes: output cmd array that will be added to actual zfs send command. (e.g. mbuffer or compression program)
:type send_pipes: list of str
:type features: list of str
:type send_pipes: list[str]
:type features: list[str]
:type prev_snapshot: ZfsDataset
:type resume_token: str
:type show_progress: bool
@ -575,13 +599,14 @@ class ZfsDataset:
# all kind of performance options:
if 'large_blocks' in features and "-L" in self.zfs_node.supported_send_options:
cmd.append("--large-block") # large block support (only if recordsize>128k which is seldomly used)
# large block support (only if recordsize>128k, which is seldom used)
cmd.append("-L") # --large-block
if write_embedded and 'embedded_data' in features and "-e" in self.zfs_node.supported_send_options:
cmd.append("--embed") # WRITE_EMBEDDED, more compact stream
cmd.append("-e") # --embed; WRITE_EMBEDDED, more compact stream
if zfs_compressed and "-c" in self.zfs_node.supported_send_options:
cmd.append("--compressed") # use compressed WRITE records
cmd.append("-c") # --compressed; use compressed WRITE records
# raw? (send over encrypted data in its original encrypted form without decrypting)
if raw:
@ -589,8 +614,8 @@ class ZfsDataset:
# progress output
if show_progress:
cmd.append("--verbose")
cmd.append("--parsable")
cmd.append("-v") # --verbose
cmd.append("-P") # --parsable
# resume a previous send? (don't need more parameters in that case)
if resume_token:
@ -599,7 +624,7 @@ class ZfsDataset:
else:
# send properties
if send_properties:
cmd.append("--props")
cmd.append("-p") # --props
# incremental?
if prev_snapshot:
@ -613,7 +638,8 @@ class ZfsDataset:
return output_pipe
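# Hedged illustration only: for a typical incremental send with every feature
# available, the short-option rewrite above produces a command shaped like
# this (the exact flag set depends on pool features and supported_send_options;
# dataset names are illustrative).
cmd = ["zfs", "send", "-L", "-e", "-c", "-v", "-P", "-p",
       "-i", "pool/data@prev", "pool/data@new"]
print(" ".join(cmd))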
def recv_pipe(self, pipe, features, recv_pipes, filter_properties=None, set_properties=None, ignore_exit_code=False, force=False):
def recv_pipe(self, pipe, features, recv_pipes, filter_properties=None, set_properties=None, ignore_exit_code=False,
force=False):
"""starts a zfs recv for this snapshot and uses pipe as input
note: you can call it on both a snapshot and a filesystem object. The
@ -623,9 +649,9 @@ class ZfsDataset:
Args:
:param recv_pipes: input cmd array that will be prepended to actual zfs recv command. (e.g. mbuffer or decompression program)
:type pipe: subprocess.Popen
:type features: list of str
:type filter_properties: list of str
:type set_properties: list of str
:type features: list[str]
:type filter_properties: list[str]
:type set_properties: list[str]
:type ignore_exit_code: bool
"""
@ -642,7 +668,7 @@ class ZfsDataset:
cmd.extend(["zfs", "recv"])
# don't mount filesystem that is received
# don't let zfs recv mount everything that's received (even with canmount=noauto!)
cmd.append("-u")
for property_ in filter_properties:
@ -672,7 +698,7 @@ class ZfsDataset:
# self.zfs_node.reset_progress()
self.zfs_node.run(cmd, inp=pipe, valid_exitcodes=valid_exitcodes)
# invalidate cache, but we at least know we exist now
# invalidate cache
self.invalidate()
# in test mode we assume everything was ok and it exists
@ -685,6 +711,34 @@ class ZfsDataset:
self.error("error during transfer")
raise (Exception("Target doesn't exist after transfer, something went wrong."))
# at this point we're sure the actual dataset exists
self.parent.force_exists = True
def automount(self):
"""Mount the dataset as if one did a zfs mount -a, but only for this dataset
Failure to mount doesn't result in an exception, but outputs errors to STDERR.
"""
self.debug("Auto mounting")
if self.properties['type'] != "filesystem":
return
if self.properties['canmount'] != 'on':
return
if self.properties['mountpoint'] == 'legacy':
return
if self.properties['mountpoint'] == 'none':
return
if self.properties['encryption'] != 'off' and self.properties['keystatus'] == 'unavailable':
return
self.zfs_node.run(["zfs", "mount", self.name], valid_exitcodes=[0,1])
def transfer_snapshot(self, target_snapshot, features, prev_snapshot, show_progress,
filter_properties, set_properties, ignore_recv_exit_code, resume_token,
raw, send_properties, write_embedded, send_pipes, recv_pipes, zfs_compressed, force):
@ -694,14 +748,14 @@ class ZfsDataset:
connects a send_pipe() to recv_pipe()
Args:
:type send_pipes: list of str
:type recv_pipes: list of str
:type send_pipes: list[str]
:type recv_pipes: list[str]
:type target_snapshot: ZfsDataset
:type features: list of str
:type features: list[str]
:type prev_snapshot: ZfsDataset
:type show_progress: bool
:type filter_properties: list of str
:type set_properties: list of str
:type filter_properties: list[str]
:type set_properties: list[str]
:type ignore_recv_exit_code: bool
:type resume_token: str
:type raw: bool
@ -715,20 +769,28 @@ class ZfsDataset:
self.debug("Transfer snapshot to {}".format(target_snapshot.filesystem_name))
if resume_token:
target_snapshot.verbose("resuming")
self.verbose("resuming")
# initial or increment
if not prev_snapshot:
target_snapshot.verbose("receiving full".format(self.snapshot_name))
self.verbose("-> {} (new)".format(target_snapshot.filesystem_name))
else:
# incremental
target_snapshot.verbose("receiving incremental".format(self.snapshot_name))
self.verbose("-> {}".format(target_snapshot.filesystem_name))
# do it
pipe = self.send_pipe(features=features, show_progress=show_progress, prev_snapshot=prev_snapshot,
resume_token=resume_token, raw=raw, send_properties=send_properties, write_embedded=write_embedded, send_pipes=send_pipes, zfs_compressed=zfs_compressed)
resume_token=resume_token, raw=raw, send_properties=send_properties,
write_embedded=write_embedded, send_pipes=send_pipes, zfs_compressed=zfs_compressed)
target_snapshot.recv_pipe(pipe, features=features, filter_properties=filter_properties,
set_properties=set_properties, ignore_exit_code=ignore_recv_exit_code, recv_pipes=recv_pipes, force=force)
set_properties=set_properties, ignore_exit_code=ignore_recv_exit_code,
recv_pipes=recv_pipes, force=force)
# try to automount it, if it's the initial transfer
if not prev_snapshot:
# in test mode it doesn't actually exist, so don't try to mount it / read properties
if not target_snapshot.zfs_node.readonly:
target_snapshot.parent.automount()
def abort_resume(self):
"""abort current resume state"""
@ -770,16 +832,16 @@ class ZfsDataset:
return None
def thin_list(self, keeps=None, ignores=None):
"""determines list of snapshots that should be kept or deleted based on
"""determines list[snapshots] that should be kept or deleted based on
the thinning schedule. cull the herd!
returns: ( keeps, obsoletes )
Args:
:param keeps: list of snapshots to always keep (usually the last)
:param keeps: list[snapshots] to always keep (usually the last)
:param ignores: snapshots to completely ignore (usually incompatible target snapshots that are going to be destroyed anyway)
:type keeps: list of ZfsDataset
:type ignores: list of ZfsDataset
:type keeps: list[ZfsDataset]
:type ignores: list[ZfsDataset]
"""
if ignores is None:
@ -806,23 +868,29 @@ class ZfsDataset:
obsolete.destroy()
self.snapshots.remove(obsolete)
def find_common_snapshot(self, target_dataset):
def find_common_snapshot(self, target_dataset, guid_check):
"""find latest common snapshot between us and target returns None if its
an initial transfer
Args:
:type guid_check: bool
:type target_dataset: ZfsDataset
"""
if not target_dataset.snapshots:
# target has nothing yet
return None
else:
for source_snapshot in reversed(self.snapshots):
if target_dataset.find_snapshot(source_snapshot):
source_snapshot.debug("common snapshot")
target_snapshot = target_dataset.find_snapshot(source_snapshot)
if target_snapshot:
if guid_check and source_snapshot.properties['guid'] != target_snapshot.properties['guid']:
target_snapshot.warning("Common snapshot has invalid guid, ignoring.")
else:
target_snapshot.debug("common snapshot")
return source_snapshot
target_dataset.error("Cant find common snapshot with source.")
raise (Exception("You probably need to delete the target dataset to fix this."))
# target_dataset.error("Cant find common snapshot with source.")
raise (Exception("Cant find common snapshot with target."))
def find_start_snapshot(self, common_snapshot, also_other_snapshots):
"""finds first snapshot to send :rtype: ZfsDataset or None if we cant
@ -849,13 +917,16 @@ class ZfsDataset:
return start_snapshot
def find_incompatible_snapshots(self, common_snapshot):
"""returns a list of snapshots that is incompatible for a zfs recv onto
def find_incompatible_snapshots(self, common_snapshot, raw):
"""returns a list[snapshots] that is incompatible for a zfs recv onto
the common_snapshot. all direct followup snapshots with written=0 are
compatible.
in raw-mode nothing is compatible. issue #219
Args:
:type common_snapshot: ZfsDataset
:type raw: bool
"""
ret = []
@ -863,7 +934,7 @@ class ZfsDataset:
if common_snapshot and self.snapshots:
followup = True
for snapshot in self.snapshots[self.find_snapshot_index(common_snapshot) + 1:]:
if not followup or int(snapshot.properties['written']) != 0:
if raw or not followup or int(snapshot.properties['written']) != 0:
followup = False
ret.append(snapshot)
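# Hedged illustration of the compatibility rule above: after the common
# snapshot, only an unbroken run of snapshots with written == 0 survives a
# zfs recv; with raw sends nothing survives (#219). Plain dicts model
# snapshots, and the helper name is hypothetical.
def incompatible(snaps_after_common, raw):
    ret, followup = [], True
    for s in snaps_after_common:
        if raw or not followup or int(s['written']) != 0:
            followup = False
            ret.append(s)
    return ret

snaps = [{'name': 'a', 'written': '0'}, {'name': 'b', 'written': '8192'}]
assert [s['name'] for s in incompatible(snaps, raw=False)] == ['b']
assert [s['name'] for s in incompatible(snaps, raw=True)] == ['a', 'b']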
@ -873,8 +944,8 @@ class ZfsDataset:
"""only returns lists of allowed properties for this dataset type
Args:
:type filter_properties: list of str
:type set_properties: list of str
:type filter_properties: list[str]
:type set_properties: list[str]
"""
allowed_filter_properties = []
@ -906,7 +977,8 @@ class ZfsDataset:
while snapshot:
# create virtual target snapshot
# NOTE: with force_exists we're telling the dataset it doesn't exist yet. (e.g. it's virtual)
virtual_snapshot = self.zfs_node.get_dataset(self.filesystem_name + "@" + snapshot.snapshot_name, force_exists=False)
virtual_snapshot = self.zfs_node.get_dataset(self.filesystem_name + "@" + snapshot.snapshot_name,
force_exists=False)
self.snapshots.append(virtual_snapshot)
snapshot = source_dataset.find_next_snapshot(snapshot, also_other_snapshots)
@ -916,9 +988,9 @@ class ZfsDataset:
Args:
:type common_snapshot: ZfsDataset
:type target_dataset: ZfsDataset
:type source_obsoletes: list of ZfsDataset
:type target_obsoletes: list of ZfsDataset
:type target_keeps: list of ZfsDataset
:type source_obsoletes: list[ZfsDataset]
:type target_obsoletes: list[ZfsDataset]
:type target_keeps: list[ZfsDataset]
"""
# on source: destroy all obsoletes before common. (since we cant send them anyways)
@ -940,7 +1012,7 @@ class ZfsDataset:
# on target: destroy everything thats obsolete, except common_snapshot
for target_snapshot in target_dataset.snapshots:
if (target_snapshot in target_obsoletes) \
and ( not common_snapshot or (target_snapshot.snapshot_name != common_snapshot.snapshot_name)):
and (not common_snapshot or (target_snapshot.snapshot_name != common_snapshot.snapshot_name)):
if target_snapshot.exists:
target_snapshot.destroy()
@ -952,8 +1024,8 @@ class ZfsDataset:
:type start_snapshot: ZfsDataset
"""
if 'receive_resume_token' in target_dataset.properties:
if start_snapshot==None:
if target_dataset.exists and 'receive_resume_token' in target_dataset.properties:
if start_snapshot is None:
target_dataset.verbose("Aborting resume, its obsolete.")
target_dataset.abort_resume()
else:
@ -966,20 +1038,22 @@ class ZfsDataset:
else:
return resume_token
def _plan_sync(self, target_dataset, also_other_snapshots):
def _plan_sync(self, target_dataset, also_other_snapshots, guid_check, raw):
"""plan where to start syncing and what to sync and what to keep
Args:
:rtype: ( ZfsDataset, ZfsDataset, list of ZfsDataset, list of ZfsDataset, list of ZfsDataset, list of ZfsDataset )
:rtype: ( ZfsDataset, ZfsDataset, list[ZfsDataset], list[ZfsDataset], list[ZfsDataset], list[ZfsDataset] )
:type target_dataset: ZfsDataset
:type also_other_snapshots: bool
:type guid_check: bool
:type raw: bool
"""
# determine common and start snapshot
target_dataset.debug("Determining start snapshot")
common_snapshot = self.find_common_snapshot(target_dataset)
common_snapshot = self.find_common_snapshot(target_dataset, guid_check=guid_check)
start_snapshot = self.find_start_snapshot(common_snapshot, also_other_snapshots)
incompatible_target_snapshots = target_dataset.find_incompatible_snapshots(common_snapshot)
incompatible_target_snapshots = target_dataset.find_incompatible_snapshots(common_snapshot, raw)
# let thinner decide whats obsolete on source
source_obsoletes = []
@ -1001,7 +1075,7 @@ class ZfsDataset:
what to do
Args:
:type incompatible_target_snapshots: list of ZfsDataset
:type incompatible_target_snapshots: list[ZfsDataset]
:type destroy_incompatible: bool
"""
@ -1009,42 +1083,60 @@ class ZfsDataset:
if not destroy_incompatible:
for snapshot in incompatible_target_snapshots:
snapshot.error("Incompatible snapshot")
raise (Exception("Please destroy incompatible snapshots or use --destroy-incompatible."))
raise (Exception("Please destroy incompatible snapshots on target, or use --destroy-incompatible."))
else:
for snapshot in incompatible_target_snapshots:
snapshot.verbose("Incompatible snapshot")
snapshot.destroy()
snapshot.destroy(fail_exception=True)
self.snapshots.remove(snapshot)
if len(incompatible_target_snapshots) > 0:
self.rollback()
def sync_snapshots(self, target_dataset, features, show_progress, filter_properties, set_properties,
ignore_recv_exit_code, holds, rollback, decrypt, encrypt, also_other_snapshots,
no_send, destroy_incompatible, send_pipes, recv_pipes, zfs_compressed, force):
no_send, destroy_incompatible, send_pipes, recv_pipes, zfs_compressed, force, guid_check):
"""sync this dataset's snapshots to target_dataset, while also thinning
out old snapshots along the way.
Args:
:type send_pipes: list of str
:type recv_pipes: list of str
:type send_pipes: list[str]
:type recv_pipes: list[str]
:type target_dataset: ZfsDataset
:type features: list of str
:type features: list[str]
:type show_progress: bool
:type filter_properties: list of str
:type set_properties: list of str
:type filter_properties: list[str]
:type set_properties: list[str]
:type ignore_recv_exit_code: bool
:type holds: bool
:type rollback: bool
:type decrypt: bool
:type also_other_snapshots: bool
:type no_send: bool
:type destroy_incompatible: bool
:type guid_check: bool
"""
self.verbose("sending to {}".format(target_dataset))
# self.verbose("-> {}".format(target_dataset))
# defaults for these settings if there is no encryption stuff going on:
send_properties = True
raw = False
write_embedded = True
# source dataset encrypted?
if self.properties.get('encryption', 'off') != 'off':
# user wants to send it over decrypted?
if decrypt:
# when decrypting, zfs can't send properties
send_properties = False
else:
# keep data encrypted by sending it raw (including properties)
raw = True
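# Hedged truth table for the encryption defaults above: decrypt drops property
# sending, otherwise an encrypted source is sent raw. A sketch only; the
# helper name is hypothetical.
def send_mode(source_encrypted, decrypt):
    send_properties, raw = True, False
    if source_encrypted:
        if decrypt:
            send_properties = False  # zfs can't send props when decrypting
        else:
            raw = True               # keep data encrypted on the wire
    return send_properties, raw

assert send_mode(False, False) == (True, False)
assert send_mode(True, True) == (False, False)
assert send_mode(True, False) == (True, True)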
(common_snapshot, start_snapshot, source_obsoletes, target_obsoletes, target_keeps,
incompatible_target_snapshots) = \
self._plan_sync(target_dataset=target_dataset, also_other_snapshots=also_other_snapshots)
self._plan_sync(target_dataset=target_dataset, also_other_snapshots=also_other_snapshots,
guid_check=guid_check, raw=raw)
# NOTE: we do this because we don't want filesystems to fill up when backups keep failing.
# Also useful with no_send to still clean up stuff.
@ -1062,43 +1154,30 @@ class ZfsDataset:
# check if we can resume
resume_token = self._validate_resume_token(target_dataset, start_snapshot)
# rollback target to latest?
if rollback:
target_dataset.rollback()
#defaults for these settings if there is no encryption stuff going on:
send_properties = True
raw = False
write_embedded = True
(active_filter_properties, active_set_properties) = self.get_allowed_properties(filter_properties, set_properties)
# source dataset encrypted?
if self.properties.get('encryption', 'off')!='off':
# user wants to send it over decrypted?
if decrypt:
# when decrypting, zfs cant send properties
send_properties=False
else:
# keep data encrypted by sending it raw (including properties)
raw=True
(active_filter_properties, active_set_properties) = self.get_allowed_properties(filter_properties,
set_properties)
# encrypt at target?
if encrypt and not raw:
# filter out encryption properties to let encryption on the target take place
active_filter_properties.extend(["keylocation","pbkdf2iters","keyformat", "encryption"])
write_embedded=False
active_filter_properties.extend(["keylocation", "pbkdf2iters", "keyformat", "encryption"])
write_embedded = False
# now actually transfer the snapshots
prev_source_snapshot = common_snapshot
source_snapshot = start_snapshot
do_rollback = rollback
while source_snapshot:
target_snapshot = target_dataset.find_snapshot(source_snapshot) # still virtual
# does target actually want it?
if target_snapshot not in target_obsoletes:
# do the rollback, one time at first transfer
if do_rollback:
target_dataset.rollback()
do_rollback = False
source_snapshot.transfer_snapshot(target_snapshot, features=features,
prev_snapshot=prev_source_snapshot, show_progress=show_progress,
filter_properties=active_filter_properties,
@ -1151,15 +1230,14 @@ class ZfsDataset:
self.zfs_node.run(cmd=cmd, valid_exitcodes=[0])
def unmount(self):
def unmount(self, mount_point):
self.debug("Unmounting")
cmd = [
"umount", self.name
"umount", mount_point
]
self.zfs_node.run(cmd=cmd, valid_exitcodes=[0])
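# Minimal sketch of the changed contract: the caller now resolves the mount
# point (e.g. from the "mountpoint" property) and passes a path, since
# "umount" operates on paths rather than dataset names. Values illustrative.
mount_point = "/mnt/backup/pool/data"  # hypothetical resolved mountpoint
cmd = ["umount", mount_point]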
def clone(self, name):
@ -1200,4 +1278,3 @@ class ZfsDataset:
self.zfs_node.run(cmd=cmd, valid_exitcodes=[0])
self.invalidate()

View File

@ -12,6 +12,7 @@ from .CachedProperty import CachedProperty
from .ZfsPool import ZfsPool
from .ZfsDataset import ZfsDataset
from .ExecuteNode import ExecuteError
from .util import datetime_now
class ZfsNode(ExecuteNode):
@ -59,7 +60,8 @@ class ZfsNode(ExecuteNode):
def thin(self, objects, keep_objects):
# NOTE: if thinning is disabled with --no-thinning, self.__thinner will be None.
if self.__thinner is not None:
return self.__thinner.thin(objects, keep_objects)
return self.__thinner.thin(objects, keep_objects, datetime_now(self.utc).timestamp())
else:
return (keep_objects, [])
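# Hedged sketch of the intent behind passing an explicit timestamp: thinning
# decisions now flow from the single mockable clock in util.datetime_now()
# instead of each component reading the time itself. StubThinner and the
# free-standing thin() are assumptions, not the real Thinner class.
from datetime import datetime

class StubThinner:
    def thin(self, objects, keep_objects, now_ts):
        # a real Thinner would apply the schedule relative to now_ts
        return (keep_objects, [])

def thin(thinner, objects, keep_objects, now):
    return thinner.thin(objects, keep_objects, now.timestamp())

keeps, obsoletes = thin(StubThinner(), [], ["snap1"], datetime(2023, 10, 2, 12, 0))
assert keeps == ["snap1"] and obsoletes == []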
@ -235,10 +237,10 @@ class ZfsNode(ExecuteNode):
except Exception as e:
pass
def selected_datasets(self, property_name, exclude_received, exclude_paths, exclude_unchanged, min_change):
def selected_datasets(self, property_name, exclude_received, exclude_paths, exclude_unchanged):
"""determine filesystems that should be backed up by looking at the special autobackup-property, systemwide
returns: list of ZfsDataset
returns: ( list of selected ZfsDataset, list of excluded ZfsDataset)
"""
self.debug("Getting selected datasets")
@ -249,8 +251,10 @@ class ZfsNode(ExecuteNode):
property_name
])
# The return lists of selected and excluded ZfsDatasets:
selected_filesystems = []
excluded_filesystems = []
# list of sources, used to resolve inherited sources
sources = {}
@ -270,9 +274,14 @@ class ZfsNode(ExecuteNode):
source = raw_source
# determine it
if dataset.is_selected(value=value, source=source, inherited=inherited, exclude_received=exclude_received,
exclude_paths=exclude_paths, exclude_unchanged=exclude_unchanged,
min_change=min_change):
selected_filesystems.append(dataset)
selected = dataset.is_selected(value=value, source=source, inherited=inherited, exclude_received=exclude_received,
exclude_paths=exclude_paths, exclude_unchanged=exclude_unchanged)
return selected_filesystems
if selected is True:
selected_filesystems.append(dataset)
elif selected is False:
excluded_filesystems.append(dataset)
# is_selected() returns None when no property is set; those datasets are skipped entirely.
return (selected_filesystems, excluded_filesystems)
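# Self-contained sketch of the new three-way triage: is_selected() returning
# True selects, False records an exclusion, None (no property set) is skipped.
# The helper name and sample data are hypothetical.
def triage(datasets_with_flags):
    selected, excluded = [], []
    for ds, flag in datasets_with_flags:
        if flag is True:
            selected.append(ds)
        elif flag is False:
            excluded.append(ds)
        # None: property not set at all, silently skipped
    return selected, excluded

sel, exc = triage([("pool/a", True), ("pool/b", False), ("pool/c", None)])
assert sel == ["pool/a"] and exc == ["pool/b"]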

View File

@ -1,129 +0,0 @@
import os.path
import os
import subprocess
import sys
import time
from signal import signal, SIGPIPE
import util
signal(SIGPIPE, util.sigpipe_handler)
try:
print ("voor eerste")
raise Exception("eerstre")
except Exception as e:
print ("voor tweede")
raise Exception("tweede")
finally:
print ("JO")
def generator():
try:
util.deb('in generator')
print ("TRIGGER SIGPIPE")
sys.stdout.flush()
util.deb('after trigger')
# if False:
yield ("bla")
# yield ("bla")
except GeneratorExit as e:
util.deb('GENEXIT '+str(e))
raise
except Exception as e:
util.deb('EXCEPT '+str(e))
finally:
util.deb('FINALLY')
print("nog iets")
sys.stdout.flush()
util.deb('after print in finally WOOP!')
util.deb('START')
g=generator()
util.deb('after generator')
for bla in g:
# print ("heb wat ontvangen")
util.deb('ontvangen van gen')
break
# raise Exception("moi")
pass
raise Exception("moi")
util.deb('after for')
while True:
pass
#
# with open('test.py', 'rb') as fh:
#
# # fsize = fh.seek(10000, os.SEEK_END)
# # print(fsize)
#
# start=time.time()
# for i in range(0,1000000):
# # fh.seek(0, 0)
# fsize=fh.seek(0, os.SEEK_END)
# # fsize=fh.tell()
# # os.path.getsize('test.py')
# print(time.time()-start)
#
#
# print(fh.tell())
#
# sys.exit(0)
#
#
#
# checked=1
# skipped=1
# coverage=0.1
#
# max_skip=0
#
#
# skipinarow=0
# while True:
# total=checked+skipped
#
# skip=coverage<random()
# if skip:
# skipped = skipped + 1
# print("S {:.2f}%".format(checked * 100 / total))
#
# skipinarow = skipinarow+1
# if skipinarow>max_skip:
# max_skip=skipinarow
# else:
# skipinarow=0
# checked=checked+1
# print("C {:.2f}%".format(checked * 100 / total))
#
# print(max_skip)
#
# skip=0
# while True:
#
# total=checked+skipped
# if skip>0:
# skip=skip-1
# skipped = skipped + 1
# print("S {:.2f}%".format(checked * 100 / total))
# else:
# checked=checked+1
# print("C {:.2f}%".format(checked * 100 / total))
#
# #calc new skip
# skip=skip+((1/coverage)-1)*(random()*2)
# # print(skip)
# if skip> max_skip:
# max_skip=skip
#
# print(max_skip)

View File

@ -1,21 +1,9 @@
# root@psyt14s:/home/psy/zfs_autobackup# ls -lh /home/psy/Downloads/carimage.zip
# -rw-rw-r-- 1 psy psy 990M Nov 26 2020 /home/psy/Downloads/carimage.zip
# root@psyt14s:/home/psy/zfs_autobackup# time sha1sum /home/psy/Downloads/carimage.zip
# a682e1a36e16fe0d0c2f011104f4a99004f19105 /home/psy/Downloads/carimage.zip
#
# real 0m2.558s
# user 0m2.105s
# sys 0m0.448s
# root@psyt14s:/home/psy/zfs_autobackup# time python3 -m zfs_autobackup.ZfsCheck
#
# real 0m1.459s
# user 0m0.993s
# sys 0m0.462s
# NOTE: surprisingly, sha1 via python3 is faster than the native sha1sum utility, even in the way we use below!
import os
import platform
import sys
from datetime import datetime
def tmp_name(suffix=""):
@ -48,7 +36,7 @@ def output_redir():
def sigpipe_handler(sig, stack):
# redir output so we don't get more SIGPIPEs during cleanup. (which may try to write to stdout)
output_redir()
deb('redir')
#deb('redir')
# def check_output():
# """make sure stdout still functions. if its broken, this will trigger a SIGPIPE which will be handled by the sigpipe_handler."""
@ -63,3 +51,13 @@ def sigpipe_handler(sig, stack):
# fh.write("DEB: "+txt+"\n")
# This should be the only source of truth for the current datetime.
# This function will be mocked during unit testing.
datetime_now_mock = None
def datetime_now(utc):
if datetime_now_mock is None:
return datetime.utcnow() if utc else datetime.now()
else:
return datetime_now_mock
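# Hedged test-side sketch of the mocking hook: assigning datetime_now_mock
# pins the clock for every caller of datetime_now(). The module path
# "zfs_autobackup.util" is inferred from the package imports above.
from datetime import datetime
import zfs_autobackup.util as util

util.datetime_now_mock = datetime(2023, 10, 1, 12, 0, 0)
assert util.datetime_now(utc=True) == datetime(2023, 10, 1, 12, 0, 0)
util.datetime_now_mock = None  # restore the real clock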