# Compare commits

100 commits
| SHA1 |
|---|
| fc3026abdc |
| 0b1081e87f |
| 8699ec5c69 |
| cba6470500 |
| d08f7bf3c1 |
| d19cb2c842 |
| f2b284c407 |
| a6cdd4b89e |
| 8176326126 |
| ad2542e930 |
| b926c86a7b |
| 915a29b36e |
| f363142926 |
| 27c598344b |
| ce817eb05c |
| b97bde3f6d |
| fa14dcdce1 |
| c34bf22f4e |
| 01425e735d |
| 56a2f26dfa |
| 8729fcac74 |
| 6151096dc8 |
| 1e8b02db28 |
| 6d69c8f2b4 |
| fc853622dd |
| 37a9f49d8d |
| ff33f46cb8 |
| e1610b6874 |
| 10c45affd7 |
| 4d12b8da5f |
| 58e098324e |
| 1ffd9a15a3 |
| e54c275685 |
| ee4fade2e4 |
| c5f1a87c40 |
| 2fe854905d |
| 1c86c6f866 |
| 8bb9769a8b |
| 3ef7c32237 |
| c254ad3b82 |
| fb7da316f8 |
| fedae35221 |
| 1cb26d48b6 |
| 87e0599130 |
| 252086e2e6 |
| 4d15f29b5b |
| 3bc37d143c |
| 4dc4bdbba5 |
| d2fe9b9ec7 |
| 2143d22ae5 |
| 138c913e58 |
| 2305fdf033 |
| 797fb23baa |
| 82c7ac5e53 |
| 293ab1d075 |
| 50e94baf4e |
| 47bd4ed490 |
| 0d26420b15 |
| 3e243a0916 |
| 499ccc6fd0 |
| ca294f9dd6 |
| 9772fc80cf |
| 83905c4614 |
| 7a3c309123 |
| 022dc1b7fc |
| 136289b4d6 |
| 5bf49cf19e |
| 735938eded |
| d83fa2f97f |
| b0db6d13cc |
| c7762d8163 |
| bc17825582 |
| b22113aad4 |
| 4e1bfd8cba |
| 0388026f94 |
| b718e282b1 |
| b6fb07a436 |
| 4f78f0cd22 |
| 2fa95f098b |
| c864e5ffad |
| 6f6a2ceee2 |
| 0813a8cef6 |
| 55f491915a |
| 04971f2f29 |
| e1344dd9da |
| ea390df6f6 |
| 9be1f334cb |
| de877362c9 |
| 9b1254a6d9 |
| c110943f20 |
| e94eb11f63 |
| 0d498e3f44 |
| dd301dc422 |
| 9e6d90adfe |
| a6b688c976 |
| 10f1290ad9 |
| b51eefa139 |
| 805d7e3536 |
| 8f0472e8f5 |
| 002aa6a731 |
### .gitignore (vendored, +3)

```diff
@@ -6,3 +6,6 @@ build/
 zfs_autobackup.egg-info
 .eggs/
 __pycache__
+.coverage
+*.pyc
+python2.env
```
### .travis.yml (new file, +31)

```yaml
jobs:
  include:
    - os: linux
      dist: xenial
      language: python
      python: 2.7
    - os: linux
      dist: xenial
      language: python
      python: 3.6
    - os: linux
      dist: bionic
      language: python
      python: 2.7
    - os: linux
      dist: bionic
      language: python
      python: 3.6

before_install:
  - sudo apt-get update
  - sudo apt-get install zfsutils-linux

script:
  # - sudo -E ./ngrok.sh
  - sudo -E ./run_tests
  # - sudo -E pip --version
```
### README.md

```diff
@@ -1,9 +1,12 @@
 # ZFS autobackup
 
+[](https://coveralls.io/github/psy0rz/zfs_autobackup) [](https://travis-ci.org/psy0rz/zfs_autobackup)
+
 ## New in v3
 
 * Complete rewrite, cleaner object oriented code.
 * Python 3 and 2 support.
+* Automated regression against real ZFS environment.
 * Installable via [pip](https://pypi.org/project/zfs-autobackup/).
 * Backwards compatible with your current backups and parameters.
 * Progressive thinning (via a destroy schedule. default schedule should be fine for most people)
@@ -19,6 +22,7 @@
 * Supports raw backups for encryption.
 * Custom SSH client config.
+
 
 ## Introduction
 
 This is a tool I wrote to make replicating ZFS datasets easy and reliable. You can either use it as a backup tool or as a replication tool.
@@ -84,7 +88,7 @@ On older servers you might have to use easy_install
 
 Its also possible to just download <https://raw.githubusercontent.com/psy0rz/zfs_autobackup/master/bin/zfs-autobackup> and run it directly.
 
-The only requirement that is sometimes missing is the `argparse` python module. Optionally you can install `colorma` for colors.
+The only requirement that is sometimes missing is the `argparse` python module. Optionally you can install `colorama` for colors.
 
 It should work with python 2.7 and higher.
 
@@ -286,7 +290,7 @@ You can specify as many rules as you need. The order of the rules doesn't matter
 
 Keep in mind its up to you to actually run zfs-autobackup often enough: If you want to keep hourly snapshots, you have to make sure you at least run it every hour.
 
-However, its no problem if you run it more or less often than that: The thinner will still do its best to choose an optimal set of snapshots to choose.
+However, its no problem if you run it more or less often than that: The thinner will still keep an optimal set of snapshots to match your schedule as good as possible.
 
 If you want to keep as few snapshots as possible, just specify 0. (`--keep-source=0` for example)
 
```
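The thinning described in this hunk is driven by a destroy schedule such as `10,1d1w,1w1m,1m1y` (keep the last 10, then one per day for a week, one per week for a month, one per month for a year). As a rough illustration only — this parser, its unit table, and its function names are hypothetical sketches, not zfs-autobackup's actual code:

```python
import re

# Hypothetical sketch: parse a thinning schedule like "10,1d1w,1w1m,1m1y".
# A bare number means "always keep the last N snapshots"; a pair such as
# "1d1w" means "keep one snapshot per 1 day, for 1 week".
UNITS = {"s": 1, "min": 60, "h": 3600, "d": 24 * 3600,
         "w": 7 * 24 * 3600, "m": 30 * 24 * 3600, "y": 365 * 24 * 3600}

def parse_rule(rule):
    """Split a rule like '1d1w' into (period_seconds, ttl_seconds)."""
    number1, unit1, number2, unit2 = re.match(
        r"([0-9]+)([a-z]+)([0-9]+)([a-z]+)", rule).groups()
    return (int(number1) * UNITS[unit1], int(number2) * UNITS[unit2])

def parse_schedule(schedule):
    """Return (always_keep, [(period, ttl), ...]) for a schedule string."""
    rules = []
    always_keep = 0
    for rule in schedule.split(","):
        if rule.isdigit():
            always_keep = int(rule)
        else:
            rules.append(parse_rule(rule))
    return (always_keep, rules)
```

This also shows why running the tool more or less often than the shortest period is harmless: the rules describe the desired density of kept snapshots, not a fixed run cadence.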
````diff
@@ -324,7 +328,7 @@ Snapshots on the source that still have to be send to the target wont be destroyed
 ## Tips
 
 * Use ```--debug``` if something goes wrong and you want to see the commands that are executed. This will also stop at the first error.
-* You can split up the snapshotting and sending tasks by creating two cronjobs. Use ```--no-send``` for the snapshotter-cronjob and use ```--no-snapshot``` for the send-cronjob. This is usefull if you only want to send at night or if your send take too long.
+* You can split up the snapshotting and sending tasks by creating two cronjobs. Use ```--no-send``` for the snapshotter-cronjob and use ```--no-snapshot``` for the send-cronjob. This is useful if you only want to send at night or if your send take too long.
 * Set the ```readonly``` property of the target filesystem to ```on```. This prevents changes on the target side. (Normally, if there are changes the next backup will fail and will require a zfs rollback.) Note that readonly means you cant change the CONTENTS of the dataset directly. Its still possible to receive new datasets and manipulate properties etc.
 * Use ```--clear-refreservation``` to save space on your backup server.
 * Use ```--clear-mountpoint``` to prevent the target server from mounting the backupped filesystem in the wrong place during a reboot.
@@ -409,9 +413,9 @@ optional arguments:
                         10,1d1w,1w1m,1m1y
   --other-snapshots     Send over other snapshots as well, not just the ones
                         created by this tool.
-  --no-snapshot         Dont create new snapshots (usefull for finishing
+  --no-snapshot         Dont create new snapshots (useful for finishing
                         uncompleted backups, or cleanups)
-  --no-send             Dont send snapshots (usefull for cleanups, or if you
+  --no-send             Dont send snapshots (useful for cleanups, or if you
                         want a separate send-cronjob)
   --min-change MIN_CHANGE
                         Number of bytes written after which we consider a
@@ -419,9 +423,9 @@ optional arguments:
   --allow-empty         If nothing has changed, still create empty snapshots.
                         (same as --min-change=0)
   --ignore-replicated   Ignore datasets that seem to be replicated some other
-                        way. (No changes since lastest snapshot. Usefull for
+                        way. (No changes since lastest snapshot. Useful for
                         proxmox HA replication)
-  --no-holds            Dont lock snapshots on the source. (Usefull to allow
+  --no-holds            Dont lock snapshots on the source. (Useful to allow
                         proxmox HA replication to switches nodes)
   --resume              Support resuming of interrupted transfers by using the
                         zfs extensible_dataset feature (both zpools should
@@ -454,7 +458,7 @@ optional arguments:
                         care! (implies --rollback)
   --ignore-transfer-errors
                         Ignore transfer errors (still checks if received
-                        filesystem exists. usefull for acltype errors)
+                        filesystem exists. useful for acltype errors)
   --raw                 For encrypted datasets, send data exactly as it exists
                         on disk.
   --test                dont change anything, just show what would be done
````
### basetest.py (new file, +91)

```python

import subprocess
import random

#default test stuff
import unittest2
import subprocess
import time
from pprint import *
from bin.zfs_autobackup import *
from mock import *
import contextlib
import sys
import io

TEST_POOLS="test_source1 test_source2 test_target1"
ZFS_USERSPACE= subprocess.check_output("dpkg-query -W zfsutils-linux |cut -f2", shell=True).decode('utf-8').rstrip()
ZFS_KERNEL= subprocess.check_output("modinfo zfs|grep ^version |sed 's/.* //'", shell=True).decode('utf-8').rstrip()

print("###########################################")
print("#### Unit testing against:")
print("#### Python :"+sys.version.replace("\n", " "))
print("#### ZFS userspace :"+ZFS_USERSPACE)
print("#### ZFS kernel :"+ZFS_KERNEL)
print("#############################################")


# for python2 compatibility
if sys.version_info.major==2:
    OutputIO=io.BytesIO
else:
    OutputIO=io.StringIO


# for python2 compatibility (python 3 has this already)
@contextlib.contextmanager
def redirect_stdout(target):
    original = sys.stdout
    try:
        sys.stdout = target
        yield
    finally:
        sys.stdout = original

# for python2 compatibility (python 3 has this already)
@contextlib.contextmanager
def redirect_stderr(target):
    original = sys.stderr
    try:
        sys.stderr = target
        yield
    finally:
        sys.stderr = original


def shelltest(cmd):
    """execute and print result as nice copypastable string for unit tests (adds extra newlines on top/bottom)"""
    ret=(subprocess.check_output(cmd , shell=True).decode('utf-8'))
    print("######### result of: {}".format(cmd))
    print(ret)
    print("#########")
    ret='\n'+ret
    return(ret)

def prepare_zpools():
    print("Preparing zfs filesystems...")

    #need ram blockdevice
    subprocess.check_call("modprobe brd rd_size=512000", shell=True)

    #remove old stuff
    subprocess.call("zpool destroy test_source1 2>/dev/null", shell=True)
    subprocess.call("zpool destroy test_source2 2>/dev/null", shell=True)
    subprocess.call("zpool destroy test_target1 2>/dev/null", shell=True)

    #create pools
    subprocess.check_call("zpool create test_source1 /dev/ram0", shell=True)
    subprocess.check_call("zpool create test_source2 /dev/ram1", shell=True)
    subprocess.check_call("zpool create test_target1 /dev/ram2", shell=True)

    #create test structure
    subprocess.check_call("zfs create -p test_source1/fs1/sub", shell=True)
    subprocess.check_call("zfs create -p test_source2/fs2/sub", shell=True)
    subprocess.check_call("zfs create -p test_source2/fs3/sub", shell=True)
    subprocess.check_call("zfs set autobackup:test=true test_source1/fs1", shell=True)
    subprocess.check_call("zfs set autobackup:test=child test_source2/fs2", shell=True)

    print("Prepare done")
```
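The `redirect_stdout`/`redirect_stderr` helpers in basetest.py backport Python 3's `contextlib.redirect_stdout` for Python 2 by saving, swapping, and restoring the stream. A standalone usage sketch (the captured string is my own example, not from the repository's tests):

```python
import contextlib
import io
import sys

# Same save/replace/restore pattern as the basetest.py backport:
@contextlib.contextmanager
def redirect_stdout(target):
    original = sys.stdout
    try:
        sys.stdout = target
        yield
    finally:
        sys.stdout = original

# Usage: capture print() output into a buffer, e.g. to assert on the
# tool's console output inside a unit test.
buf = io.StringIO()
with redirect_stdout(buf):
    print("captured line")

assert buf.getvalue() == "captured line\n"
assert sys.stdout is not buf  # original stdout restored after the block
```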
### bin/zfs-autobackup

```diff
@@ -1,7 +1,7 @@
 #!/usr/bin/env python
 # -*- coding: utf8 -*-
 
-# (c)edwin@datux.nl - Released under GPL
+# (c)edwin@datux.nl - Released under GPL V3
 #
 # Greetings from eth0 2019 :)
 
@@ -26,7 +26,7 @@ if sys.stdout.isatty():
     except ImportError:
         pass
 
-VERSION="3.0-rc9"
+VERSION="3.0-rc11"
 HEADER="zfs-autobackup v{} - Copyright 2020 E.H.Eefting (edwin@datux.nl)\n".format(VERSION)
 
 class Log:
@@ -40,6 +40,7 @@ class Log:
             print(colorama.Fore.RED+colorama.Style.BRIGHT+ "! "+txt+colorama.Style.RESET_ALL, file=sys.stderr)
         else:
             print("! "+txt, file=sys.stderr)
+        sys.stderr.flush()
 
     def verbose(self, txt):
         if self.show_verbose:
@@ -47,6 +48,7 @@ class Log:
             print(colorama.Style.NORMAL+ "  "+txt+colorama.Style.RESET_ALL)
         else:
             print("  "+txt)
+        sys.stdout.flush()
 
     def debug(self, txt):
         if self.show_debug:
@@ -54,6 +56,7 @@ class Log:
             print(colorama.Fore.GREEN+ "# "+txt+colorama.Style.RESET_ALL)
         else:
             print("# "+txt)
+        sys.stdout.flush()
 
 
 
```
```diff
@@ -223,53 +226,6 @@ class Thinner:
 
 
 
-# ######### Thinner testing code
-# now=int(time.time())
-#
-# t=Thinner("1d1w,1w1m,1m6m,1y2y", always_keep=1)
-#
-# import random
-#
-# class Thing:
-#     def __init__(self, timestamp):
-#         self.timestamp=timestamp
-#
-#     def __str__(self):
-#         age=now-self.timestamp
-#         struct=time.localtime(self.timestamp)
-#         return("{} ({} days old)".format(time.strftime("%Y-%m-%d %H:%M:%S",struct),int(age/(3600*24))))
-#
-# def test():
-#     global now
-#     things=[]
-#
-#     while True:
-#         print("#################### {}".format(time.strftime("%Y-%m-%d %H:%M:%S",time.localtime(now))))
-#
-#         (keeps, removes)=t.run(things, now)
-#
-#         print ("### KEEP ")
-#         for thing in keeps:
-#             print(thing)
-#
-#         print ("### REMOVE ")
-#         for thing in removes:
-#             print(thing)
-#
-#         things=keeps
-#
-#         #increase random amount of time and maybe add a thing
-#         now=now+random.randint(0,160000)
-#         if random.random()>=0:
-#             things.append(Thing(now))
-#
-#         sys.stdin.readline()
-#
-# test()
-
-
-
 class cached_property(object):
     """ A property that is only computed once per instance and then replaces
     itself with an ordinary attribute. Deleting the attribute resets the
```
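The `cached_property` docstring quoted in this hunk describes the classic memoizing-descriptor recipe. A minimal self-contained sketch of that pattern (simplified and hypothetical — the script's own version instead keeps results in a `_cached_properties` dict so they can be invalidated):

```python
# Minimal sketch of the pattern cached_property's docstring describes:
# compute once, then store the result on the instance so later lookups
# bypass the descriptor entirely.
class cached_property(object):
    def __init__(self, func):
        self.func = func

    def __get__(self, obj, cls):
        if obj is None:
            return self
        # Store under the same name: instance dict now shadows the descriptor.
        value = obj.__dict__[self.func.__name__] = self.func(obj)
        return value

class Dataset(object):
    def __init__(self):
        self.calls = 0

    @cached_property
    def exists(self):
        self.calls += 1  # stands in for an expensive check like 'zfs list'
        return True

d = Dataset()
assert d.exists and d.exists and d.calls == 1  # computed only once
```

This is why `ZfsDataset.exists` can be queried freely in hot paths: the underlying `zfs list` style check runs at most once per instance.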
```diff
@@ -298,17 +254,28 @@ class cached_property(object):
 
         return obj._cached_properties[propname]
 
+class Logger():
+
+    #simple logging stubs
+    def debug(self, txt):
+        print("DEBUG  : "+txt)
+
+    def verbose(self, txt):
+        print("VERBOSE: "+txt)
+
+    def error(self, txt):
+        print("ERROR  : "+txt)
+
 
-class ExecuteNode:
+class ExecuteNode(Logger):
     """an endpoint to execute local or remote commands via ssh"""
 
 
     def __init__(self, ssh_config=None, ssh_to=None, readonly=False, debug_output=False):
         """ssh_config: custom ssh config
         ssh_to: server you want to ssh to. none means local
-        readonly: only execute commands that don't make any changes (usefull for testing-runs)
+        readonly: only execute commands that don't make any changes (useful for testing-runs)
         debug_output: show output and exit codes of commands in debugging output.
         """
 
@@ -346,11 +313,14 @@ class ExecuteNode:
 
     def run(self, cmd, input=None, tab_split=False, valid_exitcodes=[ 0 ], readonly=False, hide_errors=False, pipe=False, return_stderr=False):
         """run a command on the node
-        readonly: make this True if the command doesn't make any changes and is safe to execute in testmode
-        pipe: Instead of executing, return a pipe-handle to be used to input to another run() command. (just like a | in linux)
-        input: Can be None, a string or a pipe-handle you got from another run()
-        return_stderr: return both stdout and stderr as a tuple
+
+        cmd: the actual command, should be a list, where the first item is the command and the rest are parameters.
+        input: Can be None, a string or a pipe-handle you got from another run()
+        tab_split: split tabbed files in output into a list
+        valid_exitcodes: list of valid exit codes for this command (checks exit code of both sides of a pipe)
+        readonly: make this True if the command doesn't make any changes and is safe to execute in testmode
+        hide_errors: don't show stderr output as error, instead show it as debugging output (use to hide expected errors)
+        pipe: Instead of executing, return a pipe-handle to be used to input to another run() command. (just like a | in linux)
+        return_stderr: return both stdout and stderr as a tuple. (only returns stderr from this side of the pipe)
         """
 
         encoded_cmd=[]
@@ -368,7 +338,9 @@ class ExecuteNode:
             #(this is necessary if LC_ALL=en_US.utf8 is not set in the environment)
             for arg in cmd:
                 #add single quotes for remote commands to support spaces and other weird stuff (remote commands are executed in a shell)
-                encoded_cmd.append( ("'"+arg+"'").encode('utf-8'))
+                #and escape existing single quotes (bash needs ' to end the quoted string, then a \' for the actual quote and then another ' to start a new quoted string)
+                #(and then python needs the double \ to get a single \)
+                encoded_cmd.append( ("'" + arg.replace("'","'\\''") + "'").encode('utf-8'))
 
         else:
             for arg in cmd:
```
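The escaping added in this hunk is the standard bash single-quote trick: inside `'...'` there is no escape character, so a literal `'` is produced by closing the string, emitting `\'`, and reopening it. A standalone sketch (the helper name is hypothetical):

```python
import subprocess

# Sketch of the single-quote escaping used for remote ssh commands:
# close the quoted string ('), emit an escaped quote (\'), reopen (').
def shell_quote(arg):
    return "'" + arg.replace("'", "'\\''") + "'"

# The quoted form round-trips through a real shell unchanged:
out = subprocess.check_output("echo " + shell_quote("a b 'c'"), shell=True)
assert out == b"a b 'c'\n"
assert shell_quote("it's") == "'it'\\''s'"
```

Note this differs in style (but not effect) from `shlex.quote`, which uses `'"'"'` for embedded quotes.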
```diff
@@ -392,7 +364,8 @@ class ExecuteNode:
 
         #determine stdin
         if input==None:
-            stdin=None
+            #NOTE: Not None, otherwise it reads stdin from terminal!
+            stdin=subprocess.PIPE
         elif isinstance(input,str) or type(input)=='unicode':
             self.debug("INPUT > \n"+input.rstrip())
             stdin=subprocess.PIPE
@@ -411,8 +384,12 @@ class ExecuteNode:
 
         #Note: make streaming?
         if isinstance(input,str) or type(input)=='unicode':
-            p.stdin.write(input)
+            p.stdin.write(input.encode('utf-8'))
+
+        if p.stdin:
+            p.stdin.close()
 
+        #return pipe
        if pipe:
             return(p)
 
```
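The `stdin=None` to `stdin=subprocess.PIPE` change above matters because with `None` the child inherits the parent's stdin (possibly a terminal, where a command like `cat` would sit waiting for keyboard input). Handing it a pipe and closing that pipe gives the child a clean EOF instead. A sketch with `cat` standing in for the executed command:

```python
import subprocess

# Give the child its own pipe for stdin, then close it: the child sees EOF
# instead of inheriting (and possibly blocking on) the parent's terminal.
p = subprocess.Popen(["cat"], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
p.stdin.write(b"data\n")  # the script likewise encodes str input to utf-8
p.stdin.close()           # EOF for the child; without this, cat never exits
out = p.stdout.read()
p.stdout.close()
rc = p.wait()
assert rc == 0
assert out == b"data\n"
```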
```diff
@@ -459,28 +436,98 @@ class ExecuteNode:
             if p.poll()!=None and ((not isinstance(input, subprocess.Popen)) or input.poll()!=None) and eof_count==len(selectors):
                 break
 
+        p.stderr.close()
+        p.stdout.close()
+
         if self.debug_output:
             self.debug("EXIT   > {}".format(p.returncode))
 
         #handle piped process error output and exit codes
         if isinstance(input, subprocess.Popen):
+            input.stderr.close()
+            input.stdout.close()
+
             if self.debug_output:
                 self.debug("EXIT  |> {}".format(input.returncode))
             if valid_exitcodes and input.returncode not in valid_exitcodes:
                 raise(subprocess.CalledProcessError(input.returncode, "(pipe)"))
 
         if valid_exitcodes and p.returncode not in valid_exitcodes:
             raise(subprocess.CalledProcessError(p.returncode, encoded_cmd))
 
         if return_stderr:
             return ( output_lines, error_lines )
         else:
             return(output_lines)
 
+
+class ZfsPool():
+    """a zfs pool"""
+
+    def __init__(self, zfs_node, name):
+        """name: name of the pool
+        """
+
+        self.zfs_node=zfs_node
+        self.name=name
+
+    def __repr__(self):
+        return("{}: {}".format(self.zfs_node, self.name))
+
+    def __str__(self):
+        return(self.name)
+
+    def __eq__(self, obj):
+        if not isinstance(obj, ZfsPool):
+            return(False)
+
+        return(self.name == obj.name)
+
+    def verbose(self,txt):
+        self.zfs_node.verbose("zpool {}: {}".format(self.name, txt))
+
+    def error(self,txt):
+        self.zfs_node.error("zpool {}: {}".format(self.name, txt))
+
+    def debug(self,txt):
+        self.zfs_node.debug("zpool {}: {}".format(self.name, txt))
+
+    @cached_property
+    def properties(self):
+        """all zpool properties"""
+
+        self.debug("Getting zpool properties")
+
+        cmd=[
+            "zpool", "get", "-H", "-p", "all", self.name
+        ]
+
+        ret={}
+
+        for pair in self.zfs_node.run(tab_split=True, cmd=cmd, readonly=True, valid_exitcodes=[ 0 ]):
+            if len(pair)==4:
+                ret[pair[1]]=pair[2]
+
+        return(ret)
+
+    @property
+    def features(self):
+        """get list of active zpool features"""
+
+        ret=[]
+        for (key,value) in self.properties.items():
+            if key.startswith("feature@"):
+                feature=key.split("@")[1]
+                if value=='enabled' or value=='active':
+                    ret.append(feature)
+
+        return(ret)
+
```
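The new `ZfsPool.properties`/`features` pair parses tab-separated `zpool get -H -p all` output into a dict, then keeps only `feature@...` keys whose value is `enabled` or `active`. A standalone sketch of that filtering, with made-up sample rows (the pool name and values are invented for illustration):

```python
# Sample rows in the shape 'zpool get -H -p all' produces after tab-split:
# [pool, property, value, source]. These rows are fabricated for the demo.
sample = [
    ["rpool", "size", "1099511627776", "-"],
    ["rpool", "feature@large_blocks", "enabled", "local"],
    ["rpool", "feature@embedded_data", "active", "local"],
    ["rpool", "feature@encryption", "disabled", "local"],
]

# Mirror ZfsPool.properties: property name -> value.
properties = {}
for pair in sample:
    if len(pair) == 4:
        properties[pair[1]] = pair[2]

# Mirror ZfsPool.features: keep enabled/active feature@ entries only.
features = []
for (key, value) in properties.items():
    if key.startswith("feature@"):
        feature = key.split("@")[1]
        if value == 'enabled' or value == 'active':
            features.append(feature)

assert features == ["large_blocks", "embedded_data"]
```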
```diff
@@ -500,7 +547,7 @@ class ZfsDataset():
 
     def __init__(self, zfs_node, name, force_exists=None):
         """name: full path of the zfs dataset
-        exists: specify if you already know a dataset exists or not. for performance reasons. (otherwise it will have to check with zfs list when needed)
+        exists: specify if you already know a dataset exists or not. for performance and testing reasons. (otherwise it will have to check with zfs list when needed)
         """
         self.zfs_node=zfs_node
         self.name=name #full name
@@ -622,7 +669,7 @@ class ZfsDataset():
     @cached_property
     def exists(self):
         """check if dataset exists.
-        Use force to force a specific value to be cached, if you already know. Usefull for performance reasons"""
+        Use force to force a specific value to be cached, if you already know. Useful for performance reasons"""
 
 
         if self.force_exists!=None:
```
```diff
@@ -862,10 +909,9 @@ class ZfsDataset():
         return(self.from_names(names[1:]))
 
 
-    def send_pipe(self, prev_snapshot=None, resume=True, resume_token=None, show_progress=False, raw=False):
+    def send_pipe(self, features, prev_snapshot=None, resume_token=None, show_progress=False, raw=False):
         """returns a pipe with zfs send output for this snapshot
 
-        resume: Use resuming (both sides need to support it)
         resume_token: resume sending from this token. (in that case we don't need to know snapshot names)
 
         """
@@ -875,11 +921,20 @@ class ZfsDataset():
         cmd.extend(["zfs", "send", ])
 
         #all kind of performance options:
-        cmd.append("-L") # large block support
-        cmd.append("-e") # WRITE_EMBEDDED, more compact stream
-        cmd.append("-c") # use compressed WRITE records
-        if not resume:
-            cmd.append("-D") # dedupped stream, sends less duplicate data
+        if 'large_blocks' in features and "-L" in self.zfs_node.supported_send_options:
+            cmd.append("-L") # large block support (only if recordsize>128k which is seldomly used)
+
+        if 'embedded_data' in features and "-e" in self.zfs_node.supported_send_options:
+            cmd.append("-e") # WRITE_EMBEDDED, more compact stream
+
+        if "-c" in self.zfs_node.supported_send_options:
+            cmd.append("-c") # use compressed WRITE records
+
+        #NOTE: performance is usually worse with this option, according to manual
+        #also -D will be depricated in newer ZFS versions
+        # if not resume:
+        #   if "-D" in self.zfs_node.supported_send_options:
+        #       cmd.append("-D") # dedupped stream, sends less duplicate data
 
         #raw? (for encryption)
         if raw:
```
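The reworked option logic above gates each `zfs send` flag on both the pool's feature list and the options the local `zfs` binary supports (`supported_send_options` is probed elsewhere in the script). A sketch of just that selection logic — the helper name is mine, not zfs-autobackup's:

```python
# Double-gated flag selection: a flag is added only when the pool has the
# corresponding feature AND the local zfs binary advertises the option.
def build_send_flags(features, supported_send_options):
    cmd = []
    if 'large_blocks' in features and "-L" in supported_send_options:
        cmd.append("-L")  # large block support (recordsize > 128k)
    if 'embedded_data' in features and "-e" in supported_send_options:
        cmd.append("-e")  # WRITE_EMBEDDED, more compact stream
    if "-c" in supported_send_options:
        cmd.append("-c")  # compressed WRITE records
    return cmd

# A pool without embedded_data never gets -e, even if zfs supports it:
assert build_send_flags(["large_blocks"], ["-L", "-e", "-c"]) == ["-L", "-c"]
```

This is why the change fixes interoperability with older pools or binaries: unconditionally appending `-L -e -c` (the old code) fails whenever either side lacks the feature.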
```diff
@@ -914,11 +969,11 @@ class ZfsDataset():
         return(self.zfs_node.run(cmd, pipe=True))
 
 
-    def recv_pipe(self, pipe, resume=True, filter_properties=[], set_properties=[], ignore_exit_code=False):
+    def recv_pipe(self, pipe, features, filter_properties=[], set_properties=[], ignore_exit_code=False):
         """starts a zfs recv for this snapshot and uses pipe as input
 
-        note: you can also call both a snapshot and filesystem object.
-        the resulting zfs command is the same, only our object cache is invalidated differently.
+        note: you can it both on a snapshot or filesystem object.
+        The resulting zfs command is the same, only our object cache is invalidated differently.
         """
         #### build target command
         cmd=[]
@@ -937,8 +992,9 @@ class ZfsDataset():
         #verbose output
         cmd.append("-v")
 
-        if resume:
+        if 'extensible_dataset' in features and "-s" in self.zfs_node.supported_recv_options:
             #support resuming
+            self.debug("Enabled resume support")
             cmd.append("-s")
 
         cmd.append(self.filesystem_name)
```
@@ -967,7 +1023,7 @@ class ZfsDataset():
         # cmd.append("|mbuffer -m {}".format(args.buffer))


-    def transfer_snapshot(self, target_snapshot, prev_snapshot=None, resume=True, show_progress=False, filter_properties=[], set_properties=[], ignore_recv_exit_code=False, resume_token=None, raw=False):
+    def transfer_snapshot(self, target_snapshot, features, prev_snapshot=None, show_progress=False, filter_properties=[], set_properties=[], ignore_recv_exit_code=False, resume_token=None, raw=False):
         """transfer this snapshot to target_snapshot. specify prev_snapshot for incremental transfer

         connects a send_pipe() to recv_pipe()
@@ -986,8 +1042,8 @@ class ZfsDataset():
             target_snapshot.verbose("receiving incremental".format(self.snapshot_name))

         #do it
-        pipe=self.send_pipe(resume=resume, show_progress=show_progress, prev_snapshot=prev_snapshot, resume_token=resume_token, raw=raw)
-        target_snapshot.recv_pipe(pipe, resume=resume, filter_properties=filter_properties, set_properties=set_properties, ignore_exit_code=ignore_recv_exit_code)
+        pipe=self.send_pipe(features=features, show_progress=show_progress, prev_snapshot=prev_snapshot, resume_token=resume_token, raw=raw)
+        target_snapshot.recv_pipe(pipe, features=features, filter_properties=filter_properties, set_properties=set_properties, ignore_exit_code=ignore_recv_exit_code)

     def abort_resume(self):
         """abort current resume state"""
@@ -1024,9 +1080,7 @@ class ZfsDataset():
         return(None)

-
-
-    def thin(self, keeps=[], ignores=[]):
+    def thin_list(self, keeps=[], ignores=[]):
         """determines list of snapshots that should be kept or deleted based on the thinning schedule. cull the herd!
         keep: list of snapshots to always keep (usually the last)
         ignores: snapshots to completely ignore (usually incompatible target snapshots that are going to be destroyed anyway)
@@ -1039,6 +1093,15 @@ class ZfsDataset():
         return(self.zfs_node.thinner.thin(snapshots, keep_objects=keeps))


+    def thin(self):
+        """destroys snapshots according to thin_list, except last snapshot"""
+
+        (keeps, obsoletes)=self.thin_list(keeps=self.our_snapshots[-1:])
+        for obsolete in obsoletes:
+            obsolete.destroy()
+            self.snapshots.remove(obsolete)
+
+
     def find_common_snapshot(self, target_dataset):
         """find latest common snapshot between us and target
         returns None if its an initial transfer
@@ -1114,7 +1177,7 @@ class ZfsDataset():



-    def sync_snapshots(self, target_dataset, show_progress=False, resume=True, filter_properties=[], set_properties=[], ignore_recv_exit_code=False, source_holds=True, rollback=False, raw=False, other_snapshots=False, no_send=False, destroy_incompatible=False):
+    def sync_snapshots(self, target_dataset, features, show_progress=False, filter_properties=[], set_properties=[], ignore_recv_exit_code=False, source_holds=True, rollback=False, raw=False, other_snapshots=False, no_send=False, destroy_incompatible=False):
         """sync this dataset's snapshots to target_dataset, while also thinning out old snapshots along the way."""

         #determine common and start snapshot
@@ -1137,13 +1200,13 @@ class ZfsDataset():
         #now let thinner decide what we want on both sides as final state (after all transfers are done)
         self.debug("Create thinning list")
         if self.our_snapshots:
-            (source_keeps, source_obsoletes)=self.thin(keeps=[self.our_snapshots[-1]])
+            (source_keeps, source_obsoletes)=self.thin_list(keeps=[self.our_snapshots[-1]])
         else:
             source_keeps=[]
             source_obsoletes=[]

         if target_dataset.our_snapshots:
-            (target_keeps, target_obsoletes)=target_dataset.thin(keeps=[target_dataset.our_snapshots[-1]], ignores=incompatible_target_snapshots)
+            (target_keeps, target_obsoletes)=target_dataset.thin_list(keeps=[target_dataset.our_snapshots[-1]], ignores=incompatible_target_snapshots)
         else:
             target_keeps=[]
             target_obsoletes=[]
@@ -1212,7 +1275,7 @@ class ZfsDataset():
                 #does target actually want it?
                 if target_snapshot not in target_obsoletes:
                     ( allowed_filter_properties, allowed_set_properties ) = self.get_allowed_properties(filter_properties, set_properties) #NOTE: should we let transfer_snapshot handle this?
-                    source_snapshot.transfer_snapshot(target_snapshot, prev_snapshot=prev_source_snapshot, show_progress=show_progress, resume=resume, filter_properties=allowed_filter_properties, set_properties=allowed_set_properties, ignore_recv_exit_code=ignore_recv_exit_code, resume_token=resume_token, raw=raw)
+                    source_snapshot.transfer_snapshot(target_snapshot, features=features, prev_snapshot=prev_source_snapshot, show_progress=show_progress, filter_properties=allowed_filter_properties, set_properties=allowed_set_properties, ignore_recv_exit_code=ignore_recv_exit_code, resume_token=resume_token, raw=raw)
                     resume_token=None

             #hold the new common snapshots and release the previous ones
@@ -1226,10 +1289,10 @@ class ZfsDataset():

             # we may now destroy the previous source snapshot if its obsolete
             if prev_source_snapshot in source_obsoletes:
                 prev_source_snapshot.destroy()

             # destroy the previous target snapshot if obsolete (usually this is only the common_snapshot, the rest was already destroyed or will not be send)
-            prev_target_snapshot=target_dataset.find_snapshot(common_snapshot)
+            prev_target_snapshot=target_dataset.find_snapshot(prev_source_snapshot)
             if prev_target_snapshot in target_obsoletes:
                 prev_target_snapshot.destroy()

@@ -1251,14 +1314,14 @@ class ZfsDataset():
 class ZfsNode(ExecuteNode):
     """a node that contains zfs datasets. implements global (systemwide/pool wide) zfs commands"""

-    def __init__(self, backup_name, zfs_autobackup, ssh_config=None, ssh_to=None, readonly=False, description="", debug_output=False, thinner=Thinner()):
+    def __init__(self, backup_name, logger, ssh_config=None, ssh_to=None, readonly=False, description="", debug_output=False, thinner=Thinner()):
         self.backup_name=backup_name
-        if not description:
+        if not description and ssh_to:
             self.description=ssh_to
         else:
             self.description=description

-        self.zfs_autobackup=zfs_autobackup #for logging
+        self.logger=logger

         if ssh_config:
             self.verbose("Using custom SSH config: {}".format(ssh_config))
@@ -1277,10 +1340,53 @@ class ZfsNode(ExecuteNode):

         self.thinner=thinner

+        #list of ZfsPools
+        self.__pools={}
+
         ExecuteNode.__init__(self, ssh_config=ssh_config, ssh_to=ssh_to, readonly=readonly, debug_output=debug_output)


+    @cached_property
+    def supported_send_options(self):
+        """list of supported options, for optimizing sends"""
+        #not every zfs implementation supports them all
+
+        ret=[]
+        for option in ["-L", "-e", "-c" ]:
+            if self.valid_command(["zfs","send", option, "zfs_autobackup_option_test"]):
+                ret.append(option)
+        return(ret)
+
+    @cached_property
+    def supported_recv_options(self):
+        """list of supported options"""
+        #not every zfs implementation supports them all
+
+        ret=[]
+        for option in ["-s" ]:
+            if self.valid_command(["zfs","recv", option, "zfs_autobackup_option_test"]):
+                ret.append(option)
+        return(ret)
+
+
+    def valid_command(self, cmd):
+        """test if a specified zfs options are valid exit code. use this to determine support options"""
+
+        try:
+            self.run(cmd, hide_errors=True, valid_exitcodes=[0,1])
+        except subprocess.CalledProcessError as e:
+            return False
+
+        return True
+
+
+    #TODO: also create a get_zfs_dataset() function that stores all the objects in a dict. This should optimize caching a bit and is more consistent.
+    def get_zfs_pool(self, name):
+        """get a ZfsPool() object from specified name. stores objects internally to enable caching"""
+
+        return(self.__pools.setdefault(name, ZfsPool(self, name)))
+
+
     def reset_progress(self):
         """reset progress output counters"""
         self._progress_total_bytes=0
@@ -1302,7 +1408,7 @@ class ZfsNode(ExecuteNode):
         #always output for debugging offcourse
         self.debug(prefix+line.rstrip())

-        #actual usefull info
+        #actual useful info
         if len(progress_fields)>=3:
             if progress_fields[0]=='full' or progress_fields[0]=='size':
                 self._progress_total_bytes=int(progress_fields[2])
@@ -1317,8 +1423,8 @@ class ZfsNode(ExecuteNode):
                 bytes_left=self._progress_total_bytes-bytes
                 minutes_left=int((bytes_left/(bytes/(time.time()-self._progress_start_time)))/60)

-                print(">>> {}% {}MB/s (total {}MB, {} minutes left) \r".format(percentage, speed, int(self._progress_total_bytes/(1024*1024)), minutes_left), end='')
-                sys.stdout.flush()
+                print(">>> {}% {}MB/s (total {}MB, {} minutes left) \r".format(percentage, speed, int(self._progress_total_bytes/(1024*1024)), minutes_left), end='', file=sys.stderr)
+                sys.stderr.flush()

             return

@@ -1336,13 +1442,13 @@ class ZfsNode(ExecuteNode):
             self.parse_zfs_progress(line, hide_errors, "STDERR > ")

     def verbose(self,txt):
-        self.zfs_autobackup.verbose("{} {}".format(self.description, txt))
+        self.logger.verbose("{} {}".format(self.description, txt))

     def error(self,txt,titles=[]):
-        self.zfs_autobackup.error("{} {}".format(self.description, txt))
+        self.logger.error("{} {}".format(self.description, txt))

     def debug(self,txt, titles=[]):
-        self.zfs_autobackup.debug("{} {}".format(self.description, txt))
+        self.logger.debug("{} {}".format(self.description, txt))

     def new_snapshotname(self):
         """determine uniq new snapshotname"""
@@ -1370,7 +1476,7 @@ class ZfsNode(ExecuteNode):

             pools[pool].append(snapshot)

-            #add snapshot to cache (also usefull in testmode)
+            #add snapshot to cache (also useful in testmode)
             dataset.snapshots.append(snapshot) #NOTE: this will trigger zfs list

         if not pools:
@@ -1394,6 +1500,9 @@ class ZfsNode(ExecuteNode):

         returns: list of ZfsDataset
         """
+
+        self.debug("Getting selected datasets")
+
         #get all source filesystems that have the backup property
         lines=self.run(tab_split=True, readonly=True, cmd=[
             "zfs", "get", "-t", "volume,filesystem", "-o", "name,value,source", "-s", "local,inherited", "-H", "autobackup:"+self.backup_name
@@ -1429,14 +1538,9 @@ class ZfsNode(ExecuteNode):
         return(selected_filesystems)


-
-
-
-
-
 class ZfsAutobackup:
     """main class"""
-    def __init__(self):
+    def __init__(self,argv):

         parser = argparse.ArgumentParser(
             description=HEADER,
@@ -1451,17 +1555,17 @@ class ZfsAutobackup:
         parser.add_argument('target_path', help='Target ZFS filesystem')

         parser.add_argument('--other-snapshots', action='store_true', help='Send over other snapshots as well, not just the ones created by this tool.')
-        parser.add_argument('--no-snapshot', action='store_true', help='Don\'t create new snapshots (usefull for finishing uncompleted backups, or cleanups)')
+        parser.add_argument('--no-snapshot', action='store_true', help='Don\'t create new snapshots (useful for finishing uncompleted backups, or cleanups)')
-        parser.add_argument('--no-send', action='store_true', help='Don\'t send snapshots (usefull for cleanups, or if you want a serperate send-cronjob)')
+        parser.add_argument('--no-send', action='store_true', help='Don\'t send snapshots (useful for cleanups, or if you want a serperate send-cronjob)')
         parser.add_argument('--min-change', type=int, default=1, help='Number of bytes written after which we consider a dataset changed (default %(default)s)')
         parser.add_argument('--allow-empty', action='store_true', help='If nothing has changed, still create empty snapshots. (same as --min-change=0)')
-        parser.add_argument('--ignore-replicated', action='store_true', help='Ignore datasets that seem to be replicated some other way. (No changes since lastest snapshot. Usefull for proxmox HA replication)')
+        parser.add_argument('--ignore-replicated', action='store_true', help='Ignore datasets that seem to be replicated some other way. (No changes since lastest snapshot. Useful for proxmox HA replication)')
-        parser.add_argument('--no-holds', action='store_true', help='Don\'t lock snapshots on the source. (Usefull to allow proxmox HA replication to switches nodes)')
+        parser.add_argument('--no-holds', action='store_true', help='Don\'t lock snapshots on the source. (Useful to allow proxmox HA replication to switches nodes)')
-        #not sure if this ever was usefull:
+        #not sure if this ever was useful:
         # parser.add_argument('--ignore-new', action='store_true', help='Ignore filesystem if there are already newer snapshots for it on the target (use with caution)')

-        parser.add_argument('--resume', action='store_true', help='Support resuming of interrupted transfers by using the zfs extensible_dataset feature (both zpools should have it enabled) Disadvantage is that you need to use zfs recv -A if another snapshot is created on the target during a receive. Otherwise it will keep failing.')
+        parser.add_argument('--resume', action='store_true', help=argparse.SUPPRESS)
-        parser.add_argument('--strip-path', default=0, type=int, help='Number of directory to strip from path (use 1 when cloning zones between 2 SmartOS machines)')
+        parser.add_argument('--strip-path', default=0, type=int, help='Number of directories to strip from target path (use 1 when cloning zones between 2 SmartOS machines)')
         # parser.add_argument('--buffer', default="", help='Use mbuffer with specified size to speedup zfs transfer. (e.g. --buffer 1G) Will also show nice progress output.')


@@ -1472,7 +1576,7 @@ class ZfsAutobackup:
         parser.add_argument('--set-properties', type=str, help='List of propererties to override when receiving filesystems. (you can still restore them with zfs inherit -S)')
         parser.add_argument('--rollback', action='store_true', help='Rollback changes to the latest target snapshot before starting. (normally you can prevent changes by setting the readonly property on the target_path to on)')
         parser.add_argument('--destroy-incompatible', action='store_true', help='Destroy incompatible snapshots on target. Use with care! (implies --rollback)')
-        parser.add_argument('--ignore-transfer-errors', action='store_true', help='Ignore transfer errors (still checks if received filesystem exists. usefull for acltype errors)')
+        parser.add_argument('--ignore-transfer-errors', action='store_true', help='Ignore transfer errors (still checks if received filesystem exists. useful for acltype errors)')
         parser.add_argument('--raw', action='store_true', help='For encrypted datasets, send data exactly as it exists on disk.')


@@ -1480,13 +1584,16 @@ class ZfsAutobackup:
         parser.add_argument('--verbose', action='store_true', help='verbose output')
         parser.add_argument('--debug', action='store_true', help='Show zfs commands that are executed, stops after an exception.')
         parser.add_argument('--debug-output', action='store_true', help='Show zfs commands and their output/exit codes. (noisy)')
-        parser.add_argument('--progress', action='store_true', help='show zfs progress output (to stderr)')
+        parser.add_argument('--progress', action='store_true', help='show zfs progress output (to stderr). Enabled by default on ttys.')

         #note args is the only global variable we use, since its a global readonly setting anyway
-        args = parser.parse_args()
+        args = parser.parse_args(argv)

         self.args=args

+        if sys.stderr.isatty():
+            args.progress=True
+
         if args.debug_output:
             args.debug=True

@@ -1501,6 +1608,9 @@ class ZfsAutobackup:

         self.log=Log(show_debug=self.args.debug, show_verbose=self.args.verbose)

+        if args.resume:
+            self.verbose("NOTE: The --resume option isn't needed anymore (its autodetected now)")
+

     def verbose(self,txt,titles=[]):
         self.log.verbose(txt)
@ -1517,106 +1627,134 @@ class ZfsAutobackup:
|
|||||||
|
|
||||||
def run(self):
|
def run(self):
|
||||||
|
|
||||||
self.verbose (HEADER)
|
try:
|
||||||
|
self.verbose (HEADER)
|
||||||
|
|
||||||
if self.args.test:
|
if self.args.test:
|
||||||
self.verbose("TEST MODE - SIMULATING WITHOUT MAKING ANY CHANGES")
|
self.verbose("TEST MODE - SIMULATING WITHOUT MAKING ANY CHANGES")
|
||||||
|
|
||||||
self.set_title("Settings summary")
|
self.set_title("Settings summary")
|
||||||
|
|
||||||
description="[Source]"
|
description="[Source]"
|
||||||
source_thinner=Thinner(self.args.keep_source)
|
source_thinner=Thinner(self.args.keep_source)
|
||||||
source_node=ZfsNode(self.args.backup_name, self, ssh_config=self.args.ssh_config, ssh_to=self.args.ssh_source, readonly=self.args.test, debug_output=self.args.debug_output, description=description, thinner=source_thinner)
|
source_node=ZfsNode(self.args.backup_name, self, ssh_config=self.args.ssh_config, ssh_to=self.args.ssh_source, readonly=self.args.test, debug_output=self.args.debug_output, description=description, thinner=source_thinner)
|
||||||
source_node.verbose("Send all datasets that have 'autobackup:{}=true' or 'autobackup:{}=child'".format(self.args.backup_name, self.args.backup_name))
|
source_node.verbose("Send all datasets that have 'autobackup:{}=true' or 'autobackup:{}=child'".format(self.args.backup_name, self.args.backup_name))
|
||||||
|
|
||||||
self.verbose("")
|
self.verbose("")
|
||||||
|
|
||||||
description="[Target]"
|
description="[Target]"
|
||||||
target_thinner=Thinner(self.args.keep_target)
|
target_thinner=Thinner(self.args.keep_target)
|
||||||
target_node=ZfsNode(self.args.backup_name, self, ssh_config=self.args.ssh_config, ssh_to=self.args.ssh_target, readonly=self.args.test, debug_output=self.args.debug_output, description=description, thinner=target_thinner)
|
target_node=ZfsNode(self.args.backup_name, self, ssh_config=self.args.ssh_config, ssh_to=self.args.ssh_target, readonly=self.args.test, debug_output=self.args.debug_output, description=description, thinner=target_thinner)
|
||||||
target_node.verbose("Receive datasets under: {}".format(self.args.target_path))
|
target_node.verbose("Receive datasets under: {}".format(self.args.target_path))
|
||||||
|
|
||||||
self.set_title("Selecting")
|
|
||||||
selected_source_datasets=source_node.selected_datasets
|
self.set_title("Selecting")
|
||||||
if not selected_source_datasets:
|
selected_source_datasets=source_node.selected_datasets
|
||||||
self.error("No source filesystems selected, please do a 'zfs set autobackup:{0}=true' on the source datasets you want to backup.".format(self.args.backup_name))
|
if not selected_source_datasets:
|
||||||
|
self.error("No source filesystems selected, please do a 'zfs set autobackup:{0}=true' on the source datasets you want to backup.".format(self.args.backup_name))
|
||||||
|
return(255)
|
||||||
|
|
||||||
|
source_datasets=[]
|
||||||
|
|
||||||
|
|
||||||
|
#filter out already replicated stuff?
|
||||||
|
if not self.args.ignore_replicated:
|
||||||
|
source_datasets=selected_source_datasets
|
||||||
|
else:
|
||||||
|
self.set_title("Filtering already replicated filesystems")
|
||||||
|
for selected_source_dataset in selected_source_datasets:
|
||||||
|
if selected_source_dataset.is_changed(self.args.min_change):
|
||||||
|
source_datasets.append(selected_source_dataset)
|
||||||
|
else:
|
||||||
|
selected_source_dataset.verbose("Ignoring, already replicated")
|
||||||
|
|
||||||
|
|
||||||
|
if not self.args.no_snapshot:
|
||||||
|
self.set_title("Snapshotting")
|
||||||
|
source_node.consistent_snapshot(source_datasets, source_node.new_snapshotname(), min_changed_bytes=self.args.min_change)
|
||||||
|
|
||||||
|
|
||||||
|
if self.args.no_send:
|
||||||
|
self.set_title("Thinning")
|
||||||
|
else:
|
||||||
|
self.set_title("Sending and thinning")
|
||||||
|
|
||||||
|
if self.args.filter_properties:
|
||||||
|
filter_properties=self.args.filter_properties.split(",")
|
||||||
|
else:
|
||||||
|
filter_properties=[]
|
||||||
|
|
||||||
|
if self.args.set_properties:
|
||||||
|
set_properties=self.args.set_properties.split(",")
|
||||||
|
else:
|
||||||
|
set_properties=[]
|
||||||
|
|
||||||
|
if self.args.clear_refreservation:
|
||||||
|
filter_properties.append("refreservation")
|
||||||
|
|
||||||
|
if self.args.clear_mountpoint:
|
||||||
|
set_properties.append("canmount=noauto")
|
||||||
|
|
||||||
|
#sync datasets
|
||||||
|
fail_count=0
|
||||||
|
target_datasets=[]
|
||||||
|
for source_dataset in source_datasets:
|
||||||
|
|
||||||
|
try:
|
||||||
|
#determine corresponding target_dataset
|
||||||
|
target_name=self.args.target_path + "/" + source_dataset.lstrip_path(self.args.strip_path)
|
||||||
|
target_dataset=ZfsDataset(target_node, target_name)
|
||||||
|
target_datasets.append(target_dataset)
|
||||||
|
|
||||||
|
#ensure parents exists
|
||||||
|
#TODO: this isnt perfect yet, in some cases it can create parents when it shouldn't.
|
||||||
|
if not self.args.no_send and not target_dataset.parent in target_datasets and not target_dataset.parent.exists:
|
||||||
|
target_dataset.parent.create_filesystem(parents=True)
|
||||||
|
|
||||||
|
#determine common zpool features
|
||||||
|
source_features=source_node.get_zfs_pool(source_dataset.split_path()[0]).features
|
||||||
|
target_features=target_node.get_zfs_pool(target_dataset.split_path()[0]).features
|
||||||
|
common_features=source_features and target_features
|
||||||
|
# source_dataset.debug("Common features: {}".format(common_features))
|
||||||
|
|
||||||
|
source_dataset.sync_snapshots(target_dataset, show_progress=self.args.progress, features=common_features, filter_properties=filter_properties, set_properties=set_properties, ignore_recv_exit_code=self.args.ignore_transfer_errors, source_holds= not self.args.no_holds, rollback=self.args.rollback, raw=self.args.raw, other_snapshots=self.args.other_snapshots, no_send=self.args.no_send, destroy_incompatible=self.args.destroy_incompatible)
|
||||||
|
except Exception as e:
|
||||||
|
fail_count=fail_count+1
|
||||||
|
self.error("DATASET FAILED: "+str(e))
|
||||||
|
if self.args.debug:
|
||||||
|
raise
|
||||||
|
|
||||||
|
#also thin target_datasets that are not on the source any more
|
||||||
|
self.debug("Thinning obsolete datasets")
|
||||||
|
for dataset in ZfsDataset(target_node, self.args.target_path).recursive_datasets:
|
||||||
|
if dataset not in target_datasets:
|
||||||
|
dataset.debug("Missing on source")
|
||||||
|
dataset.thin()
|
||||||
|
|
||||||
|
|
||||||
|
if not fail_count:
|
||||||
|
if self.args.test:
|
||||||
|
self.set_title("All tests successfull.")
|
||||||
|
else:
|
||||||
|
self.set_title("All backups completed successfully")
|
||||||
|
else:
|
||||||
|
self.error("{} datasets failed!".format(fail_count))
|
||||||
|
|
||||||
|
if self.args.test:
|
||||||
|
self.verbose("TEST MODE - DID NOT MAKE ANY BACKUPS!")
|
||||||
|
|
||||||
|
return(fail_count)
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
self.error("Exception: "+str(e))
|
||||||
|
if self.args.debug:
|
||||||
|
raise
|
||||||
|
return(255)
|
||||||
|
except KeyboardInterrupt as e:
|
||||||
|
self.error("Aborted")
|
||||||
return(255)
|
return(255)
            source_datasets=[]

            #filter out already replicated stuff?
            if not self.args.ignore_replicated:
                source_datasets=selected_source_datasets
            else:
                self.set_title("Filtering already replicated filesystems")
                for selected_source_dataset in selected_source_datasets:
                    if selected_source_dataset.is_changed(self.args.min_change):
                        source_datasets.append(selected_source_dataset)
                    else:
                        selected_source_dataset.verbose("Ignoring, already replicated")

            if not self.args.no_snapshot:
                self.set_title("Snapshotting")
                source_node.consistent_snapshot(source_datasets, source_node.new_snapshotname(), min_changed_bytes=self.args.min_change)

            if self.args.no_send:
                self.set_title("Thinning")
            else:
                self.set_title("Sending and thinning")

            if self.args.filter_properties:
                filter_properties=self.args.filter_properties.split(",")
            else:
                filter_properties=[]

            if self.args.set_properties:
                set_properties=self.args.set_properties.split(",")
            else:
                set_properties=[]

            if self.args.clear_refreservation:
                filter_properties.append("refreservation")

            if self.args.clear_mountpoint:
                set_properties.append("canmount=noauto")

            fail_count=0
            for source_dataset in source_datasets:

                try:
                    #determine corresponding target_dataset
                    target_name=self.args.target_path + "/" + source_dataset.lstrip_path(self.args.strip_path)
                    target_dataset=ZfsDataset(target_node, target_name)

                    #ensure parents exist
                    if not self.args.no_send and not target_dataset.parent.exists:
                        target_dataset.parent.create_filesystem(parents=True)

                    source_dataset.sync_snapshots(target_dataset, show_progress=self.args.progress, resume=self.args.resume, filter_properties=filter_properties, set_properties=set_properties, ignore_recv_exit_code=self.args.ignore_transfer_errors, source_holds= not self.args.no_holds, rollback=self.args.rollback, raw=self.args.raw, other_snapshots=self.args.other_snapshots, no_send=self.args.no_send, destroy_incompatible=self.args.destroy_incompatible)
                except Exception as e:
                    fail_count=fail_count+1
                    self.error("DATASET FAILED: "+str(e))
                    if self.args.debug:
                        raise

            if not fail_count:
                if self.args.test:
                    self.set_title("All tests successful.")
                else:
                    self.set_title("All backups completed successfully")
            else:
                self.error("{} datasets failed!".format(fail_count))

            if self.args.test:
                self.verbose("TEST MODE - DID NOT MAKE ANY BACKUPS!")

            return(fail_count)


if __name__ == "__main__":
    zfs_autobackup=ZfsAutobackup(sys.argv[1:])
    sys.exit(zfs_autobackup.run())
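The main guard above passes `sys.argv[1:]` into the constructor rather than letting the class read `sys.argv` itself, which is what lets the unit tests further down drive the tool with `"test test_target1 --verbose".split(" ")`. A minimal, hypothetical sketch of that pattern (the `Cli` class and its flags are illustrative, not the project's actual API):

```python
import argparse

# Hypothetical CLI class: taking argv as a parameter makes it testable,
# because tests can pass an explicit argument list instead of patching sys.argv.
class Cli:
    def __init__(self, argv):
        parser = argparse.ArgumentParser()
        parser.add_argument("backup_name")
        parser.add_argument("target_path")
        parser.add_argument("--verbose", action="store_true")
        self.args = parser.parse_args(argv)

    def run(self):
        # 0 = success, mirroring the fail_count return convention above
        return 0

# a test can construct it directly, no real command line needed:
cli = Cli("test test_target1 --verbose".split(" "))
print(cli.args.backup_name, cli.args.verbose)
```

In production code the entry point would call `Cli(sys.argv[1:])`, exactly as the main guard does here.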
17 ngrok.sh Executable file
@@ -0,0 +1,17 @@
#!/bin/bash

if ! [ -e ngrok ]; then
    wget -O ngrok.zip https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip
    unzip ngrok.zip
fi

{
    mkfifo pipe
    echo "Executing nc"
    nc -k -l -v 8888 <pipe | ( while true; do bash >pipe 2>&1; echo "restarting"; sleep 1; done )
    killall -SIGINT ngrok && echo "ngrok terminated"
} &

{
    echo "Executing ngrok"
    ./ngrok authtoken $NGROK_TOKEN
    ./ngrok tcp 8888 --log=stdout
} &

wait
6 requirements.txt Normal file
@@ -0,0 +1,6 @@
colorama
argparse
coverage==4.5.4
python-coveralls
unittest2
mock
31 run_tests Executable file
@@ -0,0 +1,31 @@
#!/bin/bash

if [ "$USER" != "root" ]; then
    echo "Need root to do proper zfs testing"
    exit 1
fi

#reactivate python environment, if any (useful in Travis)
[ "$VIRTUAL_ENV" ] && source $VIRTUAL_ENV/bin/activate

# the tests need ssh access to localhost
if ! [ -e /root/.ssh/id_rsa ]; then
    ssh-keygen -t rsa -f /root/.ssh/id_rsa -P '' || exit 1
    cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys || exit 1
    ssh -oStrictHostKeyChecking=no localhost true || exit 1
fi

coverage run --source bin.zfs_autobackup -m unittest discover -vv
EXIT=$?

echo
coverage report

#this does automatic Travis CI/https://coveralls.io/ integration:
if which coveralls > /dev/null; then
    echo "Submitting to coveralls.io:"
    coveralls
fi

exit $EXIT
135 test_executenode.py Normal file
@@ -0,0 +1,135 @@
from basetest import *


print("THIS TEST REQUIRES SSH TO LOCALHOST")


class TestExecuteNode(unittest2.TestCase):

    # def setUp(self):
    #     return super().setUp()

    def basics(self, node):

        with self.subTest("simple echo"):
            self.assertEqual(node.run(["echo","test"]), ["test"])

        with self.subTest("error exit code"):
            with self.assertRaises(subprocess.CalledProcessError):
                node.run(["false"])

        with self.subTest("multiline without tabsplit"):
            self.assertEqual(node.run(["echo","l1c1\tl1c2\nl2c1\tl2c2"], tab_split=False), ["l1c1\tl1c2", "l2c1\tl2c2"])

        #multiline tabsplit
        with self.subTest("multiline tabsplit"):
            self.assertEqual(node.run(["echo","l1c1\tl1c2\nl2c1\tl2c2"], tab_split=True), [['l1c1', 'l1c2'], ['l2c1', 'l2c2']])

        #escaping test (shouldn't be a problem locally, single quotes can be a problem remote via ssh)
        with self.subTest("escape test"):
            s="><`'\"@&$()$bla\\/.*!#test _+-={}[]|"
            self.assertEqual(node.run(["echo",s]), [s])

        #return stderr as well, trigger stderr by listing something non existing
        with self.subTest("stderr return"):
            (stdout, stderr)=node.run(["ls", "nonexistingfile"], return_stderr=True, valid_exitcodes=[2])
            self.assertEqual(stdout,[])
            self.assertRegex(stderr[0],"nonexistingfile")

        #slow command, make sure things don't exit too early
        with self.subTest("early exit test"):
            start_time=time.time()
            self.assertEqual(node.run(["sleep","1"]), [])
            self.assertGreaterEqual(time.time()-start_time,1)

        #input a string and check it via cat
        with self.subTest("stdin input string"):
            self.assertEqual(node.run(["cat"], input="test"), ["test"])

        #command that wants input, while we don't have input, shouldn't hang forever
        with self.subTest("stdin process with input=None (shouldn't hang)"):
            self.assertEqual(node.run(["cat"]), [])

    def test_basics_local(self):
        node=ExecuteNode(debug_output=True)
        self.basics(node)

    def test_basics_remote(self):
        node=ExecuteNode(ssh_to="localhost", debug_output=True)
        self.basics(node)

    ################

    def test_readonly(self):
        node=ExecuteNode(debug_output=True, readonly=True)

        self.assertEqual(node.run(["echo","test"], readonly=False), None)
        self.assertEqual(node.run(["echo","test"], readonly=True), ["test"])

    ################

    def pipe(self, nodea, nodeb):

        with self.subTest("pipe data"):
            output=nodea.run(["dd", "if=/dev/zero", "count=1000"], pipe=True)
            self.assertEqual(nodeb.run(["md5sum"], input=output), ["816df6f64deba63b029ca19d880ee10a -"])

        with self.subTest("exit code both ends of pipe ok"):
            output=nodea.run(["true"], pipe=True)
            nodeb.run(["true"], input=output)

        with self.subTest("error on pipe input side"):
            with self.assertRaises(subprocess.CalledProcessError):
                output=nodea.run(["false"], pipe=True)
                nodeb.run(["true"], input=output)

        with self.subTest("error on pipe output side"):
            with self.assertRaises(subprocess.CalledProcessError):
                output=nodea.run(["true"], pipe=True)
                nodeb.run(["false"], input=output)

        with self.subTest("error on both sides of pipe"):
            with self.assertRaises(subprocess.CalledProcessError):
                output=nodea.run(["false"], pipe=True)
                nodeb.run(["false"], input=output)

        with self.subTest("check stderr on pipe output side"):
            output=nodea.run(["true"], pipe=True)
            (stdout, stderr)=nodeb.run(["ls", "nonexistingfile"], input=output, return_stderr=True, valid_exitcodes=[0,2])
            self.assertEqual(stdout,[])
            self.assertRegex(stderr[0], "nonexistingfile")

        with self.subTest("check stderr on pipe input side (should be only printed)"):
            output=nodea.run(["ls", "nonexistingfile"], pipe=True)
            (stdout, stderr)=nodeb.run(["true"], input=output, return_stderr=True, valid_exitcodes=[0,2])
            self.assertEqual(stdout,[])
            self.assertEqual(stderr,[])

    def test_pipe_local_local(self):
        nodea=ExecuteNode(debug_output=True)
        nodeb=ExecuteNode(debug_output=True)
        self.pipe(nodea, nodeb)

    def test_pipe_remote_remote(self):
        nodea=ExecuteNode(ssh_to="localhost", debug_output=True)
        nodeb=ExecuteNode(ssh_to="localhost", debug_output=True)
        self.pipe(nodea, nodeb)

    def test_pipe_local_remote(self):
        nodea=ExecuteNode(debug_output=True)
        nodeb=ExecuteNode(ssh_to="localhost", debug_output=True)
        self.pipe(nodea, nodeb)

    def test_pipe_remote_local(self):
        nodea=ExecuteNode(ssh_to="localhost", debug_output=True)
        nodeb=ExecuteNode(debug_output=True)
        self.pipe(nodea, nodeb)


if __name__ == '__main__':
    unittest.main()
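The pipe tests above feed one node's stdout into another node's stdin and check the exit codes on both ends. A minimal sketch of the same pattern using plain `subprocess` (not the project's `ExecuteNode` API); the expected md5 is the one the test itself asserts for `dd if=/dev/zero count=1000`:

```python
import hashlib
import subprocess

# Producer: dd emits 1000 blocks of 512 zero bytes (dd's default block size).
producer = subprocess.Popen(["dd", "if=/dev/zero", "count=1000"],
                            stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)
# Consumer: md5sum reads the producer's stdout as its stdin.
consumer = subprocess.Popen(["md5sum"], stdin=producer.stdout,
                            stdout=subprocess.PIPE)
producer.stdout.close()  # let the producer receive SIGPIPE if the consumer dies
out, _ = consumer.communicate()

# Both ends of the pipe must succeed, as the subtests above require.
assert producer.wait() == 0 and consumer.wait() == 0

# Cross-check the piped result against a locally computed md5 of 512000 zero bytes.
digest = out.decode().split()[0]
assert digest == hashlib.md5(b"\0" * 512000).hexdigest()
print(digest)
```

Closing the parent's copy of `producer.stdout` is the standard detail here: without it the producer never sees a broken pipe if the consumer exits early.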
275 test_externalfailures.py Normal file
@@ -0,0 +1,275 @@
from basetest import *


class TestZfsNode(unittest2.TestCase):

    def setUp(self):
        prepare_zpools()
        self.longMessage=True

    # generate a resumable state
    #NOTE: this generates two resumable datasets: test_target1/test_source1/fs1 and test_target1/test_source1/fs1/sub
    def generate_resume(self):

        r=shelltest("zfs set compress=off test_source1 test_target1")

        #big change on source
        r=shelltest("dd if=/dev/zero of=/test_source1/fs1/data bs=250M count=1")

        #waste space on target
        r=shelltest("dd if=/dev/zero of=/test_target1/waste bs=250M count=1")

        #should fail and leave resume token (if supported)
        self.assertTrue(ZfsAutobackup("test test_target1 --verbose".split(" ")).run())

        #free up space
        r=shelltest("rm /test_target1/waste")
        #sync
        r=shelltest("zfs umount test_target1")
        r=shelltest("zfs mount test_target1")

    #resume initial backup
    def test_initial_resume(self):

        #initial backup, leaves resume token
        with patch('time.strftime', return_value="20101111000000"):
            self.generate_resume()

        #--test should resume and succeed
        with OutputIO() as buf:
            with redirect_stdout(buf):
                self.assertFalse(ZfsAutobackup("test test_target1 --verbose --test".split(" ")).run())

            print(buf.getvalue())

            #did we really resume?
            if "0.6.5" in ZFS_USERSPACE:
                #abort this late, for better coverage
                self.skipTest("Resume not supported in this ZFS userspace version")
            else:
                self.assertIn(": resuming", buf.getvalue())

        #should resume and succeed
        with OutputIO() as buf:
            with redirect_stdout(buf):
                self.assertFalse(ZfsAutobackup("test test_target1 --verbose".split(" ")).run())

            print(buf.getvalue())

            #did we really resume?
            if "0.6.5" in ZFS_USERSPACE:
                #abort this late, for better coverage
                self.skipTest("Resume not supported in this ZFS userspace version")
            else:
                self.assertIn(": resuming", buf.getvalue())

        r=shelltest("zfs list -H -o name -r -t all test_target1")
        self.assertMultiLineEqual(r,"""
test_target1
test_target1/test_source1
test_target1/test_source1/fs1
test_target1/test_source1/fs1@test-20101111000000
test_target1/test_source1/fs1/sub
test_target1/test_source1/fs1/sub@test-20101111000000
test_target1/test_source2
test_target1/test_source2/fs2
test_target1/test_source2/fs2/sub
test_target1/test_source2/fs2/sub@test-20101111000000
""")

    #resume incremental backup
    def test_incremental_resume(self):

        #initial backup
        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty".split(" ")).run())

        #incremental backup leaves resume token
        with patch('time.strftime', return_value="20101111000001"):
            self.generate_resume()

        #--test should resume and succeed
        with OutputIO() as buf:
            with redirect_stdout(buf):
                self.assertFalse(ZfsAutobackup("test test_target1 --verbose --test".split(" ")).run())

            print(buf.getvalue())

            #did we really resume?
            if "0.6.5" in ZFS_USERSPACE:
                #abort this late, for better coverage
                self.skipTest("Resume not supported in this ZFS userspace version")
            else:
                self.assertIn(": resuming", buf.getvalue())

        #should resume and succeed
        with OutputIO() as buf:
            with redirect_stdout(buf):
                self.assertFalse(ZfsAutobackup("test test_target1 --verbose".split(" ")).run())

            print(buf.getvalue())

            #did we really resume?
            if "0.6.5" in ZFS_USERSPACE:
                #abort this late, for better coverage
                self.skipTest("Resume not supported in this ZFS userspace version")
            else:
                self.assertIn(": resuming", buf.getvalue())

        r=shelltest("zfs list -H -o name -r -t all test_target1")
        self.assertMultiLineEqual(r,"""
test_target1
test_target1/test_source1
test_target1/test_source1/fs1
test_target1/test_source1/fs1@test-20101111000000
test_target1/test_source1/fs1@test-20101111000001
test_target1/test_source1/fs1/sub
test_target1/test_source1/fs1/sub@test-20101111000000
test_target1/test_source2
test_target1/test_source2/fs2
test_target1/test_source2/fs2/sub
test_target1/test_source2/fs2/sub@test-20101111000000
""")

    # generate an invalid resume token, and verify it's aborted automatically
    def test_initial_resumeabort(self):

        if "0.6.5" in ZFS_USERSPACE:
            self.skipTest("Resume not supported in this ZFS userspace version")

        #initial backup, leaves resume token
        with patch('time.strftime', return_value="20101111000000"):
            self.generate_resume()

        #remove corresponding source snapshot, so it becomes invalid
        shelltest("zfs destroy test_source1/fs1@test-20101111000000")

        #NOTE: it can only abort the initial dataset if it has no subs
        shelltest("zfs destroy test_target1/test_source1/fs1/sub; true")

        #--test try again, should abort old resume
        with patch('time.strftime', return_value="20101111000001"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --test".split(" ")).run())

        #try again, should abort old resume
        with patch('time.strftime', return_value="20101111000001"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose".split(" ")).run())

        r=shelltest("zfs list -H -o name -r -t all test_target1")
        self.assertMultiLineEqual(r,"""
test_target1
test_target1/test_source1
test_target1/test_source1/fs1
test_target1/test_source1/fs1@test-20101111000001
test_target1/test_source1/fs1/sub
test_target1/test_source1/fs1/sub@test-20101111000000
test_target1/test_source2
test_target1/test_source2/fs2
test_target1/test_source2/fs2/sub
test_target1/test_source2/fs2/sub@test-20101111000000
""")

    # generate an invalid resume token, and verify it's aborted automatically
    def test_incremental_resumeabort(self):

        if "0.6.5" in ZFS_USERSPACE:
            self.skipTest("Resume not supported in this ZFS userspace version")

        #initial backup
        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty".split(" ")).run())

        #incremental backup, leaves resume token
        with patch('time.strftime', return_value="20101111000001"):
            self.generate_resume()

        #remove corresponding source snapshot, so it becomes invalid
        shelltest("zfs destroy test_source1/fs1@test-20101111000001")

        #--test try again, should abort old resume
        with patch('time.strftime', return_value="20101111000002"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --test".split(" ")).run())

        #try again, should abort old resume
        with patch('time.strftime', return_value="20101111000002"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose".split(" ")).run())

        r=shelltest("zfs list -H -o name -r -t all test_target1")
        self.assertMultiLineEqual(r,"""
test_target1
test_target1/test_source1
test_target1/test_source1/fs1
test_target1/test_source1/fs1@test-20101111000000
test_target1/test_source1/fs1@test-20101111000002
test_target1/test_source1/fs1/sub
test_target1/test_source1/fs1/sub@test-20101111000000
test_target1/test_source2
test_target1/test_source2/fs2
test_target1/test_source2/fs2/sub
test_target1/test_source2/fs2/sub@test-20101111000000
""")

    #create a resume situation, where the other side doesn't want the snapshot anymore (should abort resume)
    def test_abort_unwanted_resume(self):

        if "0.6.5" in ZFS_USERSPACE:
            self.skipTest("Resume not supported in this ZFS userspace version")

        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose".split(" ")).run())

        #generate resume
        with patch('time.strftime', return_value="20101111000001"):
            self.generate_resume()

        with OutputIO() as buf:
            with redirect_stdout(buf):
                #incremental, doesn't want previous anymore
                with patch('time.strftime', return_value="20101111000002"):
                    self.assertFalse(ZfsAutobackup("test test_target1 --verbose --keep-target=0 --debug --allow-empty".split(" ")).run())

            print(buf.getvalue())

            self.assertIn(": aborting resume, since", buf.getvalue())

        r=shelltest("zfs list -H -o name -r -t all test_target1")
        self.assertMultiLineEqual(r,"""
test_target1
test_target1/test_source1
test_target1/test_source1/fs1
test_target1/test_source1/fs1@test-20101111000002
test_target1/test_source1/fs1/sub
test_target1/test_source1/fs1/sub@test-20101111000002
test_target1/test_source2
test_target1/test_source2/fs2
test_target1/test_source2/fs2/sub
test_target1/test_source2/fs2/sub@test-20101111000002
""")

    def test_missing_common(self):

        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty".split(" ")).run())

        #remove common snapshot and leave nothing
        shelltest("zfs release zfs_autobackup:test test_source1/fs1@test-20101111000000")
        shelltest("zfs destroy test_source1/fs1@test-20101111000000")

        with patch('time.strftime', return_value="20101111000001"):
            self.assertTrue(ZfsAutobackup("test test_target1 --verbose --allow-empty".split(" ")).run())

    ############# TODO:
    def test_ignoretransfererrors(self):

        self.skipTest("todo: create some kind of situation where zfs recv exits with an error but transfer is still ok (happens in practice with acltype)")
139 test_thinner.py Normal file
@@ -0,0 +1,139 @@
from basetest import *

#randint is different in python 2 vs 3
randint_compat = lambda lo, hi: lo + int(random.random() * (hi + 1 - lo))


class Thing:
    def __init__(self, timestamp):
        self.timestamp=timestamp

    def __str__(self):
        # age=now-self.timestamp
        struct=time.gmtime(self.timestamp)
        return("{}".format(time.strftime("%Y-%m-%d %H:%M:%S",struct)))


class TestThinner(unittest2.TestCase):

    # def setUp(self):
    #     return super().setUp()

    def test_incremental(self):
        ok=['2023-01-03 10:53:16',
            '2024-01-02 15:43:29',
            '2025-01-01 06:15:32',
            '2026-01-01 02:48:23',
            '2026-04-07 20:07:36',
            '2026-05-07 02:30:29',
            '2026-06-06 01:19:46',
            '2026-07-06 06:38:09',
            '2026-08-05 05:08:53',
            '2026-09-04 03:33:04',
            '2026-10-04 05:27:09',
            '2026-11-04 04:01:17',
            '2026-12-03 13:49:56',
            '2027-01-01 17:02:00',
            '2027-01-03 04:26:42',
            '2027-02-01 14:16:02',
            '2027-02-12 03:31:02',
            '2027-02-18 00:33:10',
            '2027-02-26 21:09:54',
            '2027-03-02 08:05:18',
            '2027-03-03 16:46:09',
            '2027-03-04 06:39:14',
            '2027-03-06 03:35:41',
            '2027-03-08 12:24:42',
            '2027-03-08 20:34:57']

        #some arbitrary date
        now=1589229252
        #we want deterministic results
        random.seed(1337)
        thinner=Thinner("5,10s1min,1d1w,1w1m,1m12m,1y5y")
        things=[]

        #thin incrementally while adding
        for i in range(0,5000):

            #increase random amount of time and maybe add a thing
            now=now+randint_compat(0,3600*24)
            if random.random()>=0.5:
                things.append(Thing(now))

            (keeps, removes)=thinner.thin(things, now=now)
            things=keeps

        result=[]
        for thing in things:
            result.append(str(thing))

        print("Thinner result incremental:")
        pprint.pprint(result)

        self.assertEqual(result, ok)

    def test_full(self):
        ok=['2022-03-09 01:56:23',
            '2023-01-03 10:53:16',
            '2024-01-02 15:43:29',
            '2025-01-01 06:15:32',
            '2026-01-01 02:48:23',
            '2026-03-14 09:08:04',
            '2026-04-07 20:07:36',
            '2026-05-07 02:30:29',
            '2026-06-06 01:19:46',
            '2026-07-06 06:38:09',
            '2026-08-05 05:08:53',
            '2026-09-04 03:33:04',
            '2026-10-04 05:27:09',
            '2026-11-04 04:01:17',
            '2026-12-03 13:49:56',
            '2027-01-01 17:02:00',
            '2027-01-03 04:26:42',
            '2027-02-01 14:16:02',
            '2027-02-08 02:41:14',
            '2027-02-12 03:31:02',
            '2027-02-18 00:33:10',
            '2027-02-26 21:09:54',
            '2027-03-02 08:05:18',
            '2027-03-03 16:46:09',
            '2027-03-04 06:39:14',
            '2027-03-06 03:35:41',
            '2027-03-08 12:24:42',
            '2027-03-08 20:34:57']

        #some arbitrary date
        now=1589229252
        #we want deterministic results
        random.seed(1337)
        thinner=Thinner("5,10s1min,1d1w,1w1m,1m12m,1y5y")
        things=[]

        for i in range(0,5000):

            #increase random amount of time and maybe add a thing
            now=now+randint_compat(0,3600*24)
            if random.random()>=0.5:
                things.append(Thing(now))

        (things, removes)=thinner.thin(things, now=now)

        result=[]
        for thing in things:
            result.append(str(thing))

        print("Thinner result full:")
        pprint.pprint(result)

        self.assertEqual(result, ok)


if __name__ == '__main__':
    unittest.main()
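The schedule string these tests pass to `Thinner` ("5,10s1min,1d1w,1w1m,1m12m,1y5y") is a comma-separated rule list: a bare number means "always keep the N most recent", and each `<period><ttl>` pair means "keep one snapshot per period, for ttl". A minimal, hypothetical parser sketch of that format (inferred from zfs-autobackup's documentation; this is not the project's actual `Thinner` implementation):

```python
import re

# Illustrative unit table; "min" is minutes, "m" months, approximated in seconds.
UNITS = {"s": 1, "min": 60, "h": 3600, "d": 86400,
         "w": 7 * 86400, "m": 30 * 86400, "y": 365 * 86400}

def parse_rule(rule):
    """Parse one rule like '5' (keep latest 5) or '1d1w' (one per day, kept 1 week)."""
    m = re.match(r"^(\d+)([a-z]+)(\d+)([a-z]+)$", rule)
    if not m:
        return ("keep_latest", int(rule))
    period = int(m.group(1)) * UNITS[m.group(2)]
    ttl = int(m.group(3)) * UNITS[m.group(4)]
    return ("interval", period, ttl)

rules = [parse_rule(r) for r in "5,10s1min,1d1w,1w1m,1m12m,1y5y".split(",")]
print(rules[0], rules[1])
```

Under this reading, the test schedule keeps the last 5 snapshots, one per 10 seconds for a minute, one per day for a week, one per week for a month, one per month for a year, and one per year for 5 years, which matches the yearly/monthly/daily spacing visible in the expected timestamp lists above.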
720 test_zfsautobackup.py Normal file
@@ -0,0 +1,720 @@
from basetest import *
|
||||||
|
import time
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
class TestZfsAutobackup(unittest2.TestCase):
|
||||||
|
|
||||||
|
def setUp(self):
|
||||||
|
prepare_zpools()
|
||||||
|
self.longMessage=True
|
||||||
|
|
||||||
|
def test_invalidpars(self):
|
||||||
|
|
||||||
|
self.assertEqual(ZfsAutobackup("test test_target1 --keep-source -1".split(" ")).run(), 255)
|
||||||
|
|
||||||
|
|
||||||
|
def test_defaults(self):
|
||||||
|
|
||||||
|
with self.subTest("no datasets selected"):
|
||||||
|
#should resume and succeed
|
||||||
|
|
||||||
|
with OutputIO() as buf:
|
||||||
|
with redirect_stderr(buf):
|
||||||
|
with patch('time.strftime', return_value="20101111000000"):
|
||||||
|
self.assertTrue(ZfsAutobackup("nonexisting test_target1 --verbose --debug".split(" ")).run())
|
||||||
|
|
||||||
|
print(buf.getvalue())
|
||||||
|
#correct message?
|
||||||
|
self.assertIn("No source filesystems selected", buf.getvalue())
|
||||||
|
|
||||||
|
|
||||||
|
with self.subTest("defaults with full verbose and debug"):
|
||||||
|
|
||||||
|
with patch('time.strftime', return_value="20101111000000"):
|
||||||
|
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --debug".split(" ")).run())
|
||||||
|
|
||||||
|
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
|
||||||
|
self.assertMultiLineEqual(r,"""
|
||||||
|
test_source1
|
||||||
|
test_source1/fs1
|
||||||
|
test_source1/fs1@test-20101111000000
|
||||||
|
test_source1/fs1/sub
|
||||||
|
test_source1/fs1/sub@test-20101111000000
|
||||||
|
test_source2
|
||||||
|
test_source2/fs2
|
||||||
|
test_source2/fs2/sub
|
||||||
|
test_source2/fs2/sub@test-20101111000000
|
||||||
|
test_source2/fs3
|
||||||
|
test_source2/fs3/sub
|
||||||
|
test_target1
|
||||||
|
test_target1/test_source1
|
||||||
|
test_target1/test_source1/fs1
|
||||||
|
test_target1/test_source1/fs1@test-20101111000000
|
||||||
|
test_target1/test_source1/fs1/sub
|
||||||
|
test_target1/test_source1/fs1/sub@test-20101111000000
|
||||||
|
test_target1/test_source2
|
||||||
|
test_target1/test_source2/fs2
|
||||||
|
test_target1/test_source2/fs2/sub
|
||||||
|
test_target1/test_source2/fs2/sub@test-20101111000000
|
||||||
|
""")
|
||||||
|
|
||||||
|
with self.subTest("bare defaults, allow empty"):
|
||||||
|
with patch('time.strftime', return_value="20101111000001"):
|
||||||
|
self.assertFalse(ZfsAutobackup("test test_target1 --allow-empty".split(" ")).run())
|
||||||
|
|
||||||
|
|
||||||
|
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
|
||||||
|
self.assertMultiLineEqual(r,"""
|
||||||
|
test_source1
|
||||||
|
test_source1/fs1
|
||||||
|
test_source1/fs1@test-20101111000000
|
||||||
|
test_source1/fs1@test-20101111000001
|
||||||
|
test_source1/fs1/sub
|
||||||
|
test_source1/fs1/sub@test-20101111000000
|
||||||
|
test_source1/fs1/sub@test-20101111000001
|
||||||
|
test_source2
|
||||||
|
test_source2/fs2
|
||||||
|
test_source2/fs2/sub
|
||||||
|
test_source2/fs2/sub@test-20101111000000
|
||||||
|
test_source2/fs2/sub@test-20101111000001
|
||||||
|
test_source2/fs3
|
||||||
|
test_source2/fs3/sub
|
||||||
|
test_target1
|
||||||
|
test_target1/test_source1
|
||||||
|
test_target1/test_source1/fs1
|
||||||
|
test_target1/test_source1/fs1@test-20101111000000
|
||||||
|
test_target1/test_source1/fs1@test-20101111000001
|
||||||
|
test_target1/test_source1/fs1/sub
|
||||||
|
test_target1/test_source1/fs1/sub@test-20101111000000
|
||||||
|
test_target1/test_source1/fs1/sub@test-20101111000001
|
||||||
|
test_target1/test_source2
|
||||||
|
test_target1/test_source2/fs2
|
||||||
|
test_target1/test_source2/fs2/sub
|
||||||
|
test_target1/test_source2/fs2/sub@test-20101111000000
|
||||||
|
test_target1/test_source2/fs2/sub@test-20101111000001
|
||||||
|
""")
|
||||||
|
|
||||||
|
        with self.subTest("verify holds"):

            r=shelltest("zfs get -r userrefs test_source1 test_source2 test_target1")
            self.assertMultiLineEqual(r,"""
NAME                                                   PROPERTY  VALUE  SOURCE
test_source1                                           userrefs  -      -
test_source1/fs1                                       userrefs  -      -
test_source1/fs1@test-20101111000000                   userrefs  0      -
test_source1/fs1@test-20101111000001                   userrefs  1      -
test_source1/fs1/sub                                   userrefs  -      -
test_source1/fs1/sub@test-20101111000000               userrefs  0      -
test_source1/fs1/sub@test-20101111000001               userrefs  1      -
test_source2                                           userrefs  -      -
test_source2/fs2                                       userrefs  -      -
test_source2/fs2/sub                                   userrefs  -      -
test_source2/fs2/sub@test-20101111000000               userrefs  0      -
test_source2/fs2/sub@test-20101111000001               userrefs  1      -
test_source2/fs3                                       userrefs  -      -
test_source2/fs3/sub                                   userrefs  -      -
test_target1                                           userrefs  -      -
test_target1/test_source1                              userrefs  -      -
test_target1/test_source1/fs1                          userrefs  -      -
test_target1/test_source1/fs1@test-20101111000000      userrefs  0      -
test_target1/test_source1/fs1@test-20101111000001      userrefs  1      -
test_target1/test_source1/fs1/sub                      userrefs  -      -
test_target1/test_source1/fs1/sub@test-20101111000000  userrefs  0      -
test_target1/test_source1/fs1/sub@test-20101111000001  userrefs  1      -
test_target1/test_source2                              userrefs  -      -
test_target1/test_source2/fs2                          userrefs  -      -
test_target1/test_source2/fs2/sub                      userrefs  -      -
test_target1/test_source2/fs2/sub@test-20101111000000  userrefs  0      -
test_target1/test_source2/fs2/sub@test-20101111000001  userrefs  1      -
""")

    def test_ignore_othersnaphots(self):

        r=shelltest("zfs snapshot test_source1/fs1@othersimple")
        r=shelltest("zfs snapshot test_source1/fs1@otherdate-20001111000000")

        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose".split(" ")).run())

        r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
        self.assertMultiLineEqual(r,"""
test_source1
test_source1/fs1
test_source1/fs1@othersimple
test_source1/fs1@otherdate-20001111000000
test_source1/fs1@test-20101111000000
test_source1/fs1/sub
test_source1/fs1/sub@test-20101111000000
test_source2
test_source2/fs2
test_source2/fs2/sub
test_source2/fs2/sub@test-20101111000000
test_source2/fs3
test_source2/fs3/sub
test_target1
test_target1/test_source1
test_target1/test_source1/fs1
test_target1/test_source1/fs1@test-20101111000000
test_target1/test_source1/fs1/sub
test_target1/test_source1/fs1/sub@test-20101111000000
test_target1/test_source2
test_target1/test_source2/fs2
test_target1/test_source2/fs2/sub
test_target1/test_source2/fs2/sub@test-20101111000000
""")

    def test_othersnaphots(self):

        r=shelltest("zfs snapshot test_source1/fs1@othersimple")
        r=shelltest("zfs snapshot test_source1/fs1@otherdate-20001111000000")

        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --other-snapshots".split(" ")).run())

        r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
        self.assertMultiLineEqual(r,"""
test_source1
test_source1/fs1
test_source1/fs1@othersimple
test_source1/fs1@otherdate-20001111000000
test_source1/fs1@test-20101111000000
test_source1/fs1/sub
test_source1/fs1/sub@test-20101111000000
test_source2
test_source2/fs2
test_source2/fs2/sub
test_source2/fs2/sub@test-20101111000000
test_source2/fs3
test_source2/fs3/sub
test_target1
test_target1/test_source1
test_target1/test_source1/fs1
test_target1/test_source1/fs1@othersimple
test_target1/test_source1/fs1@otherdate-20001111000000
test_target1/test_source1/fs1@test-20101111000000
test_target1/test_source1/fs1/sub
test_target1/test_source1/fs1/sub@test-20101111000000
test_target1/test_source2
test_target1/test_source2/fs2
test_target1/test_source2/fs2/sub
test_target1/test_source2/fs2/sub@test-20101111000000
""")

    def test_nosnapshot(self):

        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-snapshot".split(" ")).run())

        r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
        #(only parents are created)
        #TODO: it probably shouldn't create these
        self.assertMultiLineEqual(r,"""
test_source1
test_source1/fs1
test_source1/fs1/sub
test_source2
test_source2/fs2
test_source2/fs2/sub
test_source2/fs3
test_source2/fs3/sub
test_target1
test_target1/test_source1
test_target1/test_source2
test_target1/test_source2/fs2
""")

    def test_nosend(self):

        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-send".split(" ")).run())

        r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
        #(only parents are created)
        #TODO: it probably shouldn't create these
        self.assertMultiLineEqual(r,"""
test_source1
test_source1/fs1
test_source1/fs1@test-20101111000000
test_source1/fs1/sub
test_source1/fs1/sub@test-20101111000000
test_source2
test_source2/fs2
test_source2/fs2/sub
test_source2/fs2/sub@test-20101111000000
test_source2/fs3
test_source2/fs3/sub
test_target1
""")

    def test_ignorereplicated(self):
        r=shelltest("zfs snapshot test_source1/fs1@otherreplication")

        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --ignore-replicated".split(" ")).run())

        r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
        #(only parents are created)
        #TODO: it probably shouldn't create these
        self.assertMultiLineEqual(r,"""
test_source1
test_source1/fs1
test_source1/fs1@otherreplication
test_source1/fs1/sub
test_source1/fs1/sub@test-20101111000000
test_source2
test_source2/fs2
test_source2/fs2/sub
test_source2/fs2/sub@test-20101111000000
test_source2/fs3
test_source2/fs3/sub
test_target1
test_target1/test_source1
test_target1/test_source1/fs1
test_target1/test_source1/fs1/sub
test_target1/test_source1/fs1/sub@test-20101111000000
test_target1/test_source2
test_target1/test_source2/fs2
test_target1/test_source2/fs2/sub
test_target1/test_source2/fs2/sub@test-20101111000000
""")

    def test_noholds(self):

        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-holds".split(" ")).run())

        r=shelltest("zfs get -r userrefs test_source1 test_source2 test_target1")
        self.assertMultiLineEqual(r,"""
NAME                                                   PROPERTY  VALUE  SOURCE
test_source1                                           userrefs  -      -
test_source1/fs1                                       userrefs  -      -
test_source1/fs1@test-20101111000000                   userrefs  0      -
test_source1/fs1/sub                                   userrefs  -      -
test_source1/fs1/sub@test-20101111000000               userrefs  0      -
test_source2                                           userrefs  -      -
test_source2/fs2                                       userrefs  -      -
test_source2/fs2/sub                                   userrefs  -      -
test_source2/fs2/sub@test-20101111000000               userrefs  0      -
test_source2/fs3                                       userrefs  -      -
test_source2/fs3/sub                                   userrefs  -      -
test_target1                                           userrefs  -      -
test_target1/test_source1                              userrefs  -      -
test_target1/test_source1/fs1                          userrefs  -      -
test_target1/test_source1/fs1@test-20101111000000      userrefs  1      -
test_target1/test_source1/fs1/sub                      userrefs  -      -
test_target1/test_source1/fs1/sub@test-20101111000000  userrefs  1      -
test_target1/test_source2                              userrefs  -      -
test_target1/test_source2/fs2                          userrefs  -      -
test_target1/test_source2/fs2/sub                      userrefs  -      -
test_target1/test_source2/fs2/sub@test-20101111000000  userrefs  1      -
""")

    def test_strippath(self):

        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --strip-path=1".split(" ")).run())

        r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
        self.assertMultiLineEqual(r,"""
test_source1
test_source1/fs1
test_source1/fs1@test-20101111000000
test_source1/fs1/sub
test_source1/fs1/sub@test-20101111000000
test_source2
test_source2/fs2
test_source2/fs2/sub
test_source2/fs2/sub@test-20101111000000
test_source2/fs3
test_source2/fs3/sub
test_target1
test_target1/fs1
test_target1/fs1@test-20101111000000
test_target1/fs1/sub
test_target1/fs1/sub@test-20101111000000
test_target1/fs2
test_target1/fs2/sub
test_target1/fs2/sub@test-20101111000000
""")

    def test_clearrefres(self):

        #on zfs utils 0.6.x, -x isn't supported
        r=shelltest("zfs recv -x bla test >/dev/null </dev/zero; echo $?")
        if r=="\n2\n":
            self.skipTest("This zfs-userspace version doesn't support -x")

        r=shelltest("zfs set refreservation=1M test_source1/fs1")

        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --clear-refreservation".split(" ")).run())

        r=shelltest("zfs get refreservation -r test_source1 test_source2 test_target1")
        self.assertMultiLineEqual(r,"""
NAME                                                   PROPERTY        VALUE  SOURCE
test_source1                                           refreservation  none   default
test_source1/fs1                                       refreservation  1M     local
test_source1/fs1@test-20101111000000                   refreservation  -      -
test_source1/fs1/sub                                   refreservation  none   default
test_source1/fs1/sub@test-20101111000000               refreservation  -      -
test_source2                                           refreservation  none   default
test_source2/fs2                                       refreservation  none   default
test_source2/fs2/sub                                   refreservation  none   default
test_source2/fs2/sub@test-20101111000000               refreservation  -      -
test_source2/fs3                                       refreservation  none   default
test_source2/fs3/sub                                   refreservation  none   default
test_target1                                           refreservation  none   default
test_target1/test_source1                              refreservation  none   default
test_target1/test_source1/fs1                          refreservation  none   default
test_target1/test_source1/fs1@test-20101111000000      refreservation  -      -
test_target1/test_source1/fs1/sub                      refreservation  none   default
test_target1/test_source1/fs1/sub@test-20101111000000  refreservation  -      -
test_target1/test_source2                              refreservation  none   default
test_target1/test_source2/fs2                          refreservation  none   default
test_target1/test_source2/fs2/sub                      refreservation  none   default
test_target1/test_source2/fs2/sub@test-20101111000000  refreservation  -      -
""")

    def test_clearmount(self):

        #on zfs utils 0.6.x, -o isn't supported
        r=shelltest("zfs recv -o bla=1 test >/dev/null </dev/zero; echo $?")
        if r=="\n2\n":
            self.skipTest("This zfs-userspace version doesn't support -o")

        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --clear-mountpoint".split(" ")).run())

        r=shelltest("zfs get canmount -r test_source1 test_source2 test_target1")
        self.assertMultiLineEqual(r,"""
NAME                                                   PROPERTY  VALUE   SOURCE
test_source1                                           canmount  on      default
test_source1/fs1                                       canmount  on      default
test_source1/fs1@test-20101111000000                   canmount  -       -
test_source1/fs1/sub                                   canmount  on      default
test_source1/fs1/sub@test-20101111000000               canmount  -       -
test_source2                                           canmount  on      default
test_source2/fs2                                       canmount  on      default
test_source2/fs2/sub                                   canmount  on      default
test_source2/fs2/sub@test-20101111000000               canmount  -       -
test_source2/fs3                                       canmount  on      default
test_source2/fs3/sub                                   canmount  on      default
test_target1                                           canmount  on      default
test_target1/test_source1                              canmount  on      default
test_target1/test_source1/fs1                          canmount  noauto  local
test_target1/test_source1/fs1@test-20101111000000      canmount  -       -
test_target1/test_source1/fs1/sub                      canmount  noauto  local
test_target1/test_source1/fs1/sub@test-20101111000000  canmount  -       -
test_target1/test_source2                              canmount  on      default
test_target1/test_source2/fs2                          canmount  on      default
test_target1/test_source2/fs2/sub                      canmount  noauto  local
test_target1/test_source2/fs2/sub@test-20101111000000  canmount  -       -
""")

    def test_rollback(self):

        #initial backup
        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose".split(" ")).run())

        #make change
        r=shelltest("zfs mount test_target1/test_source1/fs1")
        r=shelltest("touch /test_target1/test_source1/fs1/change.txt")

        with patch('time.strftime', return_value="20101111000001"):
            #should fail (busy)
            self.assertTrue(ZfsAutobackup("test test_target1 --verbose --allow-empty".split(" ")).run())

        with patch('time.strftime', return_value="20101111000002"):
            #rollback, should succeed
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty --rollback".split(" ")).run())

    def test_destroyincompat(self):

        #initial backup
        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose".split(" ")).run())

        #add multiple compatible snapshots (written is still 0)
        r=shelltest("zfs snapshot test_target1/test_source1/fs1@compatible1")
        r=shelltest("zfs snapshot test_target1/test_source1/fs1@compatible2")

        with patch('time.strftime', return_value="20101111000001"):
            #should be ok, is compatible
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty".split(" ")).run())

        #add incompatible snapshot by changing and snapshotting
        r=shelltest("zfs mount test_target1/test_source1/fs1")
        r=shelltest("touch /test_target1/test_source1/fs1/change.txt")
        r=shelltest("zfs snapshot test_target1/test_source1/fs1@incompatible1")

        with patch('time.strftime', return_value="20101111000002"):
            #--test should fail, now incompatible
            self.assertTrue(ZfsAutobackup("test test_target1 --verbose --allow-empty --test".split(" ")).run())

        with patch('time.strftime', return_value="20101111000002"):
            #should fail, now incompatible
            self.assertTrue(ZfsAutobackup("test test_target1 --verbose --allow-empty".split(" ")).run())

        with patch('time.strftime', return_value="20101111000003"):
            #--test should succeed by destroying incompatibles
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty --destroy-incompatible --test".split(" ")).run())

        with patch('time.strftime', return_value="20101111000003"):
            #should succeed by destroying incompatibles
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty --destroy-incompatible".split(" ")).run())

    def test_keepsourcetarget(self):

        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty".split(" ")).run())

        with patch('time.strftime', return_value="20101111000001"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty".split(" ")).run())

        #should still have all snapshots
        r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
        self.assertMultiLineEqual(r,"""
test_source1
test_source1/fs1
test_source1/fs1@test-20101111000000
test_source1/fs1@test-20101111000001
test_source1/fs1/sub
test_source1/fs1/sub@test-20101111000000
test_source1/fs1/sub@test-20101111000001
test_source2
test_source2/fs2
test_source2/fs2/sub
test_source2/fs2/sub@test-20101111000000
test_source2/fs2/sub@test-20101111000001
test_source2/fs3
test_source2/fs3/sub
test_target1
test_target1/test_source1
test_target1/test_source1/fs1
test_target1/test_source1/fs1@test-20101111000000
test_target1/test_source1/fs1@test-20101111000001
test_target1/test_source1/fs1/sub
test_target1/test_source1/fs1/sub@test-20101111000000
test_target1/test_source1/fs1/sub@test-20101111000001
test_target1/test_source2
test_target1/test_source2/fs2
test_target1/test_source2/fs2/sub
test_target1/test_source2/fs2/sub@test-20101111000000
test_target1/test_source2/fs2/sub@test-20101111000001
""")

        #run again with keep=0
        with patch('time.strftime', return_value="20101111000002"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty --keep-source=0 --keep-target=0".split(" ")).run())

        #should only have the last snapshots
        r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
        self.assertMultiLineEqual(r,"""
test_source1
test_source1/fs1
test_source1/fs1@test-20101111000002
test_source1/fs1/sub
test_source1/fs1/sub@test-20101111000002
test_source2
test_source2/fs2
test_source2/fs2/sub
test_source2/fs2/sub@test-20101111000002
test_source2/fs3
test_source2/fs3/sub
test_target1
test_target1/test_source1
test_target1/test_source1/fs1
test_target1/test_source1/fs1@test-20101111000002
test_target1/test_source1/fs1/sub
test_target1/test_source1/fs1/sub@test-20101111000002
test_target1/test_source2
test_target1/test_source2/fs2
test_target1/test_source2/fs2/sub
test_target1/test_source2/fs2/sub@test-20101111000002
""")

    def test_ssh(self):

        #test all ssh directions

        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty --ssh-source localhost".split(" ")).run())

        with patch('time.strftime', return_value="20101111000001"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty --ssh-target localhost".split(" ")).run())

        with patch('time.strftime', return_value="20101111000002"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty --ssh-source localhost --ssh-target localhost".split(" ")).run())

        r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
        self.assertMultiLineEqual(r,"""
test_source1
test_source1/fs1
test_source1/fs1@test-20101111000000
test_source1/fs1@test-20101111000001
test_source1/fs1@test-20101111000002
test_source1/fs1/sub
test_source1/fs1/sub@test-20101111000000
test_source1/fs1/sub@test-20101111000001
test_source1/fs1/sub@test-20101111000002
test_source2
test_source2/fs2
test_source2/fs2/sub
test_source2/fs2/sub@test-20101111000000
test_source2/fs2/sub@test-20101111000001
test_source2/fs2/sub@test-20101111000002
test_source2/fs3
test_source2/fs3/sub
test_target1
test_target1/test_source1
test_target1/test_source1/fs1
test_target1/test_source1/fs1@test-20101111000000
test_target1/test_source1/fs1@test-20101111000001
test_target1/test_source1/fs1@test-20101111000002
test_target1/test_source1/fs1/sub
test_target1/test_source1/fs1/sub@test-20101111000000
test_target1/test_source1/fs1/sub@test-20101111000001
test_target1/test_source1/fs1/sub@test-20101111000002
test_target1/test_source2
test_target1/test_source2/fs2
test_target1/test_source2/fs2/sub
test_target1/test_source2/fs2/sub@test-20101111000000
test_target1/test_source2/fs2/sub@test-20101111000001
test_target1/test_source2/fs2/sub@test-20101111000002
""")

    def test_minchange(self):

        #initial
        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --min-change 100000".split(" ")).run())

        #make small change, use umount to reflect the changes immediately
        r=shelltest("zfs set compress=off test_source1")
        r=shelltest("touch /test_source1/fs1/change.txt")
        r=shelltest("zfs umount test_source1/fs1; zfs mount test_source1/fs1")

        #too small a change, takes no snapshots
        with patch('time.strftime', return_value="20101111000001"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --min-change 100000".split(" ")).run())

        #make big change
        r=shelltest("dd if=/dev/zero of=/test_source1/fs1/change.txt bs=200000 count=1")
        r=shelltest("zfs umount test_source1/fs1; zfs mount test_source1/fs1")

        #bigger change, should take a snapshot
        with patch('time.strftime', return_value="20101111000002"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --min-change 100000".split(" ")).run())

        r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
        self.assertMultiLineEqual(r,"""
test_source1
test_source1/fs1
test_source1/fs1@test-20101111000000
test_source1/fs1@test-20101111000002
test_source1/fs1/sub
test_source1/fs1/sub@test-20101111000000
test_source2
test_source2/fs2
test_source2/fs2/sub
test_source2/fs2/sub@test-20101111000000
test_source2/fs3
test_source2/fs3/sub
test_target1
test_target1/test_source1
test_target1/test_source1/fs1
test_target1/test_source1/fs1@test-20101111000000
test_target1/test_source1/fs1@test-20101111000002
test_target1/test_source1/fs1/sub
test_target1/test_source1/fs1/sub@test-20101111000000
test_target1/test_source2
test_target1/test_source2/fs2
test_target1/test_source2/fs2/sub
test_target1/test_source2/fs2/sub@test-20101111000000
""")

    def test_test(self):

        #initial
        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --test".split(" ")).run())

        r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
        self.assertMultiLineEqual(r,"""
test_source1
test_source1/fs1
test_source1/fs1/sub
test_source2
test_source2/fs2
test_source2/fs2/sub
test_source2/fs3
test_source2/fs3/sub
test_target1
""")

        #actually make the initial backup
        with patch('time.strftime', return_value="20101111000001"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose".split(" ")).run())

        #test incremental
        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --test".split(" ")).run())

        r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
        self.assertMultiLineEqual(r,"""
test_source1
test_source1/fs1
test_source1/fs1@test-20101111000001
test_source1/fs1/sub
test_source1/fs1/sub@test-20101111000001
test_source2
test_source2/fs2
test_source2/fs2/sub
test_source2/fs2/sub@test-20101111000001
test_source2/fs3
test_source2/fs3/sub
test_target1
test_target1/test_source1
test_target1/test_source1/fs1
test_target1/test_source1/fs1@test-20101111000001
test_target1/test_source1/fs1/sub
test_target1/test_source1/fs1/sub@test-20101111000001
test_target1/test_source2
test_target1/test_source2/fs2
test_target1/test_source2/fs2/sub
test_target1/test_source2/fs2/sub@test-20101111000001
""")

    ###########################
    # TODO:

    def test_raw(self):

        self.skipTest("todo: later when travis supports zfs 0.8")
test_zfsnode.py (new file, 123 lines)
@@ -0,0 +1,123 @@
from basetest import *


class TestZfsNode(unittest2.TestCase):

    def setUp(self):
        prepare_zpools()
        # return super().setUp()

    def test_consistent_snapshot(self):
        logger=Logger()
        description="[Source]"
        node=ZfsNode("test", logger, description=description)

        with self.subTest("first snapshot"):
            node.consistent_snapshot(node.selected_datasets, "test-1",100000)
            r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
            self.assertEqual(r,"""
test_source1
test_source1/fs1
test_source1/fs1@test-1
test_source1/fs1/sub
test_source1/fs1/sub@test-1
test_source2
test_source2/fs2
test_source2/fs2/sub
test_source2/fs2/sub@test-1
test_source2/fs3
test_source2/fs3/sub
test_target1
""")

        with self.subTest("second snapshot, no changes, no snapshot"):
            node.consistent_snapshot(node.selected_datasets, "test-2",1)
            r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
            self.assertEqual(r,"""
test_source1
test_source1/fs1
test_source1/fs1@test-1
test_source1/fs1/sub
test_source1/fs1/sub@test-1
test_source2
test_source2/fs2
test_source2/fs2/sub
test_source2/fs2/sub@test-1
test_source2/fs3
test_source2/fs3/sub
test_target1
""")

        with self.subTest("second snapshot, no changes, empty snapshot"):
            node.consistent_snapshot(node.selected_datasets, "test-2",0)
            r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
            self.assertEqual(r,"""
test_source1
test_source1/fs1
test_source1/fs1@test-1
test_source1/fs1@test-2
test_source1/fs1/sub
test_source1/fs1/sub@test-1
test_source1/fs1/sub@test-2
test_source2
test_source2/fs2
test_source2/fs2/sub
test_source2/fs2/sub@test-1
test_source2/fs2/sub@test-2
test_source2/fs3
test_source2/fs3/sub
test_target1
""")

    def test_getselected(self):
        logger=Logger()
        description="[Source]"
        node=ZfsNode("test", logger, description=description)
        s=pformat(node.selected_datasets)
        print(s)

        #basics
        self.assertEqual (s, """[(local): test_source1/fs1,
 (local): test_source1/fs1/sub,
 (local): test_source2/fs2/sub]""")

        #caching, so expect the same result after changing it
        subprocess.check_call("zfs set autobackup:test=true test_source2/fs3", shell=True)
        self.assertEqual (s, """[(local): test_source1/fs1,
 (local): test_source1/fs1/sub,
 (local): test_source2/fs2/sub]""")

    def test_validcommand(self):
        logger=Logger()
        description="[Source]"
        node=ZfsNode("test", logger, description=description)

        with self.subTest("test invalid option"):
            self.assertFalse(node.valid_command(["zfs", "send", "--invalid-option", "nonexisting"]))
        with self.subTest("test valid option"):
            self.assertTrue(node.valid_command(["zfs", "send", "-v", "nonexisting"]))

    def test_supportedsendoptions(self):
        logger=Logger()
        description="[Source]"
        node=ZfsNode("test", logger, description=description)
        # -D is probably always supported
        self.assertGreater(len(node.supported_send_options),0)

    def test_supportedrecvoptions(self):
        logger=Logger()
        description="[Source]"
        #NOTE: this could hang via ssh if we don't close filehandles properly (which was a previous bug)
        node=ZfsNode("test", logger, description=description, ssh_to='localhost')
        self.assertIsInstance(node.supported_recv_options, list)

if __name__ == '__main__':
    unittest.main()