Compare commits
110 commits (v3.0-rc9-1 ... v3.0)
| SHA1 |
|---|
| fd7015b77a | |||
| f524845dbb | |||
| 51c15ec618 | |||
| 9fe13a4207 | |||
| 7b8b536d53 | |||
| 122035dfef | |||
| 7b278be0b9 | |||
| cc1a9a3d72 | |||
| eaad31e8b4 | |||
| 470b4aaf55 | |||
| fc3026abdc | |||
| 0b1081e87f | |||
| 8699ec5c69 | |||
| cba6470500 | |||
| d08f7bf3c1 | |||
| d19cb2c842 | |||
| f2b284c407 | |||
| a6cdd4b89e | |||
| 8176326126 | |||
| ad2542e930 | |||
| b926c86a7b | |||
| 915a29b36e | |||
| f363142926 | |||
| 27c598344b | |||
| ce817eb05c | |||
| b97bde3f6d | |||
| fa14dcdce1 | |||
| c34bf22f4e | |||
| 01425e735d | |||
| 56a2f26dfa | |||
| 8729fcac74 | |||
| 6151096dc8 | |||
| 1e8b02db28 | |||
| 6d69c8f2b4 | |||
| fc853622dd | |||
| 37a9f49d8d | |||
| ff33f46cb8 | |||
| e1610b6874 | |||
| 10c45affd7 | |||
| 4d12b8da5f | |||
| 58e098324e | |||
| 1ffd9a15a3 | |||
| e54c275685 | |||
| ee4fade2e4 | |||
| c5f1a87c40 | |||
| 2fe854905d | |||
| 1c86c6f866 | |||
| 8bb9769a8b | |||
| 3ef7c32237 | |||
| c254ad3b82 | |||
| fb7da316f8 | |||
| fedae35221 | |||
| 1cb26d48b6 | |||
| 87e0599130 | |||
| 252086e2e6 | |||
| 4d15f29b5b | |||
| 3bc37d143c | |||
| 4dc4bdbba5 | |||
| d2fe9b9ec7 | |||
| 2143d22ae5 | |||
| 138c913e58 | |||
| 2305fdf033 | |||
| 797fb23baa | |||
| 82c7ac5e53 | |||
| 293ab1d075 | |||
| 50e94baf4e | |||
| 47bd4ed490 | |||
| 0d26420b15 | |||
| 3e243a0916 | |||
| 499ccc6fd0 | |||
| ca294f9dd6 | |||
| 9772fc80cf | |||
| 83905c4614 | |||
| 7a3c309123 | |||
| 022dc1b7fc | |||
| 136289b4d6 | |||
| 5bf49cf19e | |||
| 735938eded | |||
| d83fa2f97f | |||
| b0db6d13cc | |||
| c7762d8163 | |||
| bc17825582 | |||
| b22113aad4 | |||
| 4e1bfd8cba | |||
| 0388026f94 | |||
| b718e282b1 | |||
| b6fb07a436 | |||
| 4f78f0cd22 | |||
| 2fa95f098b | |||
| c864e5ffad | |||
| 6f6a2ceee2 | |||
| 0813a8cef6 | |||
| 55f491915a | |||
| 04971f2f29 | |||
| e1344dd9da | |||
| ea390df6f6 | |||
| 9be1f334cb | |||
| de877362c9 | |||
| 9b1254a6d9 | |||
| c110943f20 | |||
| e94eb11f63 | |||
| 0d498e3f44 | |||
| dd301dc422 | |||
| 9e6d90adfe | |||
| a6b688c976 | |||
| 10f1290ad9 | |||
| b51eefa139 | |||
| 805d7e3536 | |||
| 8f0472e8f5 | |||
| 002aa6a731 |
.gitignore (vendored, 3 lines changed)

@@ -6,3 +6,6 @@ build/
zfs_autobackup.egg-info
.eggs/
__pycache__
.coverage
*.pyc
python2.env
.travis.yml (new file, 31 lines)

@@ -0,0 +1,31 @@
jobs:
  include:
    - os: linux
      dist: xenial
      language: python
      python: 2.7
    - os: linux
      dist: xenial
      language: python
      python: 3.6
    - os: linux
      dist: bionic
      language: python
      python: 2.7
    - os: linux
      dist: bionic
      language: python
      python: 3.6

before_install:
  - sudo apt-get update
  - sudo apt-get install zfsutils-linux

script:
  # - sudo -E ./ngrok.sh
  - sudo -E ./run_tests
  # - sudo -E pip --version
README.md (197 lines changed)

@@ -1,9 +1,12 @@
# ZFS autobackup

[Coverage Status](https://coveralls.io/github/psy0rz/zfs_autobackup) [Build Status](https://travis-ci.org/psy0rz/zfs_autobackup)

## New in v3

* Complete rewrite, cleaner object oriented code.
* Python 3 and 2 support.
* Automated regression against real ZFS environment.
* Installable via [pip](https://pypi.org/project/zfs-autobackup/).
* Backwards compatible with your current backups and parameters.
* Progressive thinning (via a destroy schedule. default schedule should be fine for most people)

@@ -19,9 +22,12 @@
* Supports raw backups for encryption.
* Custom SSH client config.

## Introduction

This is a tool I wrote to make replicating ZFS datasets easy and reliable. You can either use it as a backup tool or as a replication tool.
This is a tool I wrote to make replicating ZFS datasets easy and reliable.

You can either use it as a **backup** tool, **replication** tool or **snapshot** tool.

You can select what to backup by setting a custom `ZFS property`. This allows you to set and forget: Configure it so it backups your entire pool, and you never have to worry about backupping again. Even new datasets you create later will be backupped.

@@ -31,13 +37,13 @@ Since its using ZFS commands, you can see what its actually doing by specifying

An important feature thats missing from other tools is a reliable `--test` option: This allows you to see what zfs-autobackup will do and tune your parameters. It will do everything, except make changes to your zfs datasets.

Another nice thing is progress reporting with `--progress`. Its very useful with HUGE datasets, when you want to know how many hours/days it will take.
Another nice thing is progress reporting: Its very useful with HUGE datasets, when you want to know how many hours/days it will take.

zfs-autobackup tries to be the easiest to use backup tool for zfs.

## Features

* Works across operating systems: Tested with Linux, FreeBSD/FreeNAS and SmartOS.
* Works across operating systems: Tested with **Linux**, **FreeBSD/FreeNAS** and **SmartOS**.
* Works in combination with existing replication systems. (Like Proxmox HA)
* Automatically selects filesystems to backup by looking at a simple ZFS property. (recursive)
* Creates consistent snapshots. (takes all snapshots at once, atomic.)

@@ -48,13 +54,16 @@ zfs-autobackup tries to be the easiest to use backup tool for zfs.
* Or even pull data from a server while pushing the backup to another server.
* Can be scheduled via a simple cronjob or run directly from commandline.
* Supports resuming of interrupted transfers. (via the zfs extensible_dataset feature)
* Backups and snapshots can be named to prevent conflicts. (multiple backups from and to the same filesystems are no problem)
* Backups and snapshots can be named to prevent conflicts. (multiple backups from and to the same datasets are no problem)
* Always creates a new snapshot before starting.
* Checks everything but tries continue on non-fatal errors when possible. (Reports error-count when done)
* Ability to 'finish' aborted backups to see what goes wrong.
* Ability to manually 'finish' failed backups to see whats going on.
* Easy to debug and has a test-mode. Actual unix commands are printed.
* Keeps latest X snapshots remote and locally. (default 30, configurable)
* Uses **progressive thinning** for older snapshots.
* Uses zfs-holds on important snapshots so they cant be accidentally destroyed.
* Automatic resuming of failed transfers.
* Can continue from existing common snapshots. (e.g. easy migration)
* Gracefully handles destroyed datasets on source.
* Easy installation:
  * Just install zfs-autobackup via pip, or download it manually.
  * Written in python and uses zfs-commands, no 3rd party dependency's or libraries.

@@ -84,13 +93,13 @@ On older servers you might have to use easy_install

Its also possible to just download <https://raw.githubusercontent.com/psy0rz/zfs_autobackup/master/bin/zfs-autobackup> and run it directly.

The only requirement that is sometimes missing is the `argparse` python module. Optionally you can install `colorma` for colors.
The only requirement that is sometimes missing is the `argparse` python module. Optionally you can install `colorama` for colors.

It should work with python 2.7 and higher.

## Example

In this example we're going to backup a machine called `pve` to a machine called `backup`.
In this example we're going to backup a machine called `server1` to a machine called `backup`.

### Setup SSH login

@@ -98,7 +107,7 @@ zfs-autobackup needs passwordless login via ssh. This means generating an ssh ke

#### Generate SSH key on `backup`

On the server that runs zfs-autobackup you need to create an SSH key. You only need to do this once.
On the backup-server that runs zfs-autobackup you need to create an SSH key. You only need to do this once.

Use the `ssh-keygen` command and leave the passphrase empty:

@@ -127,14 +136,14 @@ The key's randomart image is:
root@backup:~#
```

#### Copy SSH key to `pve`
#### Copy SSH key to `server1`

Now you need to copy the public part of the key to `pve`
Now you need to copy the public part of the key to `server1`

The `ssh-copy-id` command is a handy tool to automate this. It will just ask for your password.

```console
root@backup:~# ssh-copy-id root@pve.server.com
root@backup:~# ssh-copy-id root@server1.server.com
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys

@@ -142,11 +151,12 @@ Password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'root@pve.server.com'"
Now try logging into the machine, with: "ssh 'root@server1.server.com'"
and check to make sure that only the key(s) you wanted were added.

root@backup:~#
```
This allows the backup-server to login to `server1` as root without password.
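
If you want a quick scripted sanity check that passwordless login works before the first run, a small sketch like the one below will do. It is purely illustrative and not part of the original README; the hostname is just the example's `server1`.

```python
# Minimal sketch: verify passwordless SSH from the backup-server to server1.
import subprocess

rc = subprocess.call(["ssh", "-o", "BatchMode=yes", "root@server1.server.com", "true"])
if rc == 0:
    print("Passwordless SSH works")
else:
    print("SSH login failed (exit code {})".format(rc))
```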

### Select filesystems to backup

@@ -155,12 +165,12 @@ Its important to choose a unique and consistent backup name. In this case we nam

On the source zfs system set the ```autobackup:offsite1``` zfs property to true:

```console
[root@pve ~]# zfs set autobackup:offsite1=true rpool
[root@pve ~]# zfs get -t filesystem,volume autobackup:offsite1
[root@server1 ~]# zfs set autobackup:offsite1=true rpool
[root@server1 ~]# zfs get -t filesystem,volume autobackup:offsite1
NAME                      PROPERTY             VALUE  SOURCE
rpool                     autobackup:offsite1  true   local
rpool/ROOT                autobackup:offsite1  true   inherited from rpool
rpool/ROOT/pve-1          autobackup:offsite1  true   inherited from rpool
rpool/ROOT/server1-1      autobackup:offsite1  true   inherited from rpool
rpool/data                autobackup:offsite1  true   inherited from rpool
rpool/data/vm-100-disk-0  autobackup:offsite1  true   inherited from rpool
rpool/swap                autobackup:offsite1  true   inherited from rpool

@@ -170,12 +180,12 @@ rpool/swap autobackup:offsite1 true

Because we don't want to backup everything, we can exclude certain filesystem by setting the property to false:

```console
[root@pve ~]# zfs set autobackup:offsite1=false rpool/swap
[root@pve ~]# zfs get -t filesystem,volume autobackup:offsite1
[root@server1 ~]# zfs set autobackup:offsite1=false rpool/swap
[root@server1 ~]# zfs get -t filesystem,volume autobackup:offsite1
NAME                      PROPERTY             VALUE  SOURCE
rpool                     autobackup:offsite1  true   local
rpool/ROOT                autobackup:offsite1  true   inherited from rpool
rpool/ROOT/pve-1          autobackup:offsite1  true   inherited from rpool
rpool/ROOT/server1-1      autobackup:offsite1  true   inherited from rpool
rpool/data                autobackup:offsite1  true   inherited from rpool
rpool/data/vm-100-disk-0  autobackup:offsite1  true   inherited from rpool
rpool/swap                autobackup:offsite1  false  local
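
For scripting, the same selection can be read back programmatically. The sketch below is not part of the original README; it simply wraps the same `zfs get` command with the example backup name `offsite1`.

```python
# Sketch: list datasets selected or excluded for the backup named "offsite1",
# using the same zfs property query that zfs-autobackup itself uses.
import subprocess

output = subprocess.check_output([
    "zfs", "get", "-t", "volume,filesystem",
    "-o", "name,value,source", "-s", "local,inherited",
    "-H", "autobackup:offsite1",
]).decode("utf-8")

for line in output.splitlines():
    name, value, source = line.split("\t")
    if value == "true":
        print("selected:", name)
    elif value == "false":
        print("excluded:", name)
```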
@@ -187,10 +197,10 @@ rpool/swap autobackup:offsite1 false

Run the script on the backup server and pull the data from the server specified by --ssh-source.

```console
[root@backup ~]# zfs-autobackup --ssh-source pve.server.com offsite1 backup/pve --progress --verbose
[root@backup ~]# zfs-autobackup --ssh-source server1.server.com offsite1 backup/server1 --progress --verbose

#### Settings summary
[Source] Datasets on: pve.server.com
[Source] Datasets on: server1.server.com
[Source] Keep the last 10 snapshots.
[Source] Keep every 1 day, delete after 1 week.
[Source] Keep every 1 week, delete after 1 month.

@@ -202,12 +212,12 @@ Run the script on the backup server and pull the data from the server specified
[Target] Keep every 1 day, delete after 1 week.
[Target] Keep every 1 week, delete after 1 month.
[Target] Keep every 1 month, delete after 1 year.
[Target] Receive datasets under: backup/pve
[Target] Receive datasets under: backup/server1

#### Selecting
[Source] rpool: Selected (direct selection)
[Source] rpool/ROOT: Selected (inherited selection)
[Source] rpool/ROOT/pve-1: Selected (inherited selection)
[Source] rpool/ROOT/server1-1: Selected (inherited selection)
[Source] rpool/data: Selected (inherited selection)
[Source] rpool/data/vm-100-disk-0: Selected (inherited selection)
[Source] rpool/swap: Ignored (disabled)

@@ -219,13 +229,13 @@ Run the script on the backup server and pull the data from the server specified
[Source] Creating snapshot offsite1-20200218180123

#### Sending and thinning
[Target] backup/pve/rpool/ROOT/pve-1@offsite1-20200218175435: receiving full
[Target] backup/pve/rpool/ROOT/pve-1@offsite1-20200218175547: receiving incremental
[Target] backup/pve/rpool/ROOT/pve-1@offsite1-20200218175706: receiving incremental
[Target] backup/pve/rpool/ROOT/pve-1@offsite1-20200218180049: receiving incremental
[Target] backup/pve/rpool/ROOT/pve-1@offsite1-20200218180123: receiving incremental
[Target] backup/pve/rpool/data@offsite1-20200218175435: receiving full
[Target] backup/pve/rpool/data/vm-100-disk-0@offsite1-20200218175435: receiving full
[Target] backup/server1/rpool/ROOT/server1-1@offsite1-20200218175435: receiving full
[Target] backup/server1/rpool/ROOT/server1-1@offsite1-20200218175547: receiving incremental
[Target] backup/server1/rpool/ROOT/server1-1@offsite1-20200218175706: receiving incremental
[Target] backup/server1/rpool/ROOT/server1-1@offsite1-20200218180049: receiving incremental
[Target] backup/server1/rpool/ROOT/server1-1@offsite1-20200218180123: receiving incremental
[Target] backup/server1/rpool/data@offsite1-20200218175435: receiving full
[Target] backup/server1/rpool/data/vm-100-disk-0@offsite1-20200218175435: receiving full
...
```

@@ -243,7 +253,45 @@ Once you've got the correct settings for your situation, you can just store the

Or just create a script and run it manually when you need it.

### Thinning out obsolete snapshots
## Use as snapshot tool

You can use zfs-autobackup to only make snapshots.

Just dont specify the target-path:
```console
root@ws1:~# zfs-autobackup test --verbose
zfs-autobackup v3.0-rc12 - Copyright 2020 E.H.Eefting (edwin@datux.nl)

#### Source settings
[Source] Datasets are local
[Source] Keep the last 10 snapshots.
[Source] Keep every 1 day, delete after 1 week.
[Source] Keep every 1 week, delete after 1 month.
[Source] Keep every 1 month, delete after 1 year.
[Source] Selects all datasets that have property 'autobackup:test=true' (or childs of datasets that have 'autobackup:test=child')

#### Selecting
[Source] test_source1/fs1: Selected (direct selection)
[Source] test_source1/fs1/sub: Selected (inherited selection)
[Source] test_source2/fs2: Ignored (only childs)
[Source] test_source2/fs2/sub: Selected (inherited selection)

#### Snapshotting
[Source] Creating snapshots test-20200710125958 in pool test_source1
[Source] Creating snapshots test-20200710125958 in pool test_source2

#### Thinning source
[Source] test_source1/fs1@test-20200710125948: Destroying
[Source] test_source1/fs1/sub@test-20200710125948: Destroying
[Source] test_source2/fs2/sub@test-20200710125948: Destroying

#### All operations completed successfully
(No target_path specified, only operated as snapshot tool.)
```

This also allows you to make several snapshots during the day, but only backup the data at night when the server is not busy.
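
The snapshot names in the output above follow a simple pattern: the backup name plus a timestamp. A small sketch (illustrative only, not taken from the tool itself) of how such a name can be generated:

```python
# Sketch: build a snapshot name like test-20200710125958
# (backup name followed by a %Y%m%d%H%M%S timestamp, as seen in the output above).
import time

backup_name = "test"
snapshot_name = "{}-{}".format(backup_name, time.strftime("%Y%m%d%H%M%S"))
print(snapshot_name)  # e.g. test-20200710125958
```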
## Thinning out obsolete snapshots

The thinner is the thing that destroys old snapshots on the source and target.

@@ -251,7 +299,25 @@ The thinner operates "stateless": There is nothing in the name or properties of

Note that the thinner will ONLY destroy snapshots that are matching the naming pattern of zfs-autobackup. If you use `--other-snapshots`, it wont destroy those snapshots after replicating them to the target.

#### Thinning schedule
### Destroying missing datasets

When a dataset has been destroyed or deselected on the source, but still exists on the target we call it a missing dataset. Missing datasets will be still thinned out according to the schedule.

The final snapshot will never be destroyed, unless you specify a **deadline** with the `--destroy-missing` option:

In that case it will look at the last snapshot we took and determine if is older than the deadline you specified. e.g: `--destroy-missing 30d` will start destroying things 30 days after the last snapshot.

#### After the deadline

When the deadline is passed, all our snapshots, except the last one will be destroyed. Irregardless of the normal thinning schedule.

The dataset has to have the following properties to be finally really destroyed:

* The dataset has no direct child-filesystems or volumes.
* The only snapshot left is the last one created by zfs-autobackup.
* The remaining snapshot has no clones.
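
As an illustration, the `--destroy-missing` deadline described above boils down to comparing the age of the last snapshot against the parsed deadline. The sketch below is not from the README; the unit table is an assumption for illustration.

```python
# Sketch: decide whether a missing dataset has passed its --destroy-missing deadline.
import time

UNITS = {"s": 1, "min": 60, "h": 3600, "d": 86400, "w": 7 * 86400}

def parse_deadline(spec):
    """Parse something like '30d' into seconds."""
    for unit, seconds in sorted(UNITS.items(), key=lambda kv: -len(kv[0])):
        if spec.endswith(unit):
            return int(spec[: -len(unit)]) * seconds
    return int(spec)

last_snapshot_time = time.time() - 45 * 86400   # pretend the last snapshot is 45 days old
deadline = parse_deadline("30d")

if time.time() - last_snapshot_time > deadline:
    print("deadline passed: destroy everything except the last snapshot")
else:
    print("deadline not reached yet: keep thinning on the normal schedule")
```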
### Thinning schedule

The default thinning schedule is: `10,1d1w,1w1m,1m1y`.

@@ -286,13 +352,13 @@ You can specify as many rules as you need. The order of the rules doesn't matter

Keep in mind its up to you to actually run zfs-autobackup often enough: If you want to keep hourly snapshots, you have to make sure you at least run it every hour.

However, its no problem if you run it more or less often than that: The thinner will still do its best to choose an optimal set of snapshots to choose.
However, its no problem if you run it more or less often than that: The thinner will still keep an optimal set of snapshots to match your schedule as good as possible.

If you want to keep as few snapshots as possible, just specify 0. (`--keep-source=0` for example)

If you want to keep ALL the snapshots, just specify a very high number.
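
To make the schedule format concrete, here is a small parsing sketch. It is not from the README; the unit lengths (a month as 30 days, a year as 365 days) are illustrative assumptions. The first plain number is the "keep the last N snapshots" count, and each PERIODTTL pair means "keep one snapshot per PERIOD, for TTL".

```python
# Sketch: split a thinning schedule like "10,1d1w,1w1m,1m1y" into its parts.
import re

UNITS = {"d": 86400, "w": 7 * 86400, "m": 30 * 86400, "y": 365 * 86400}

def to_seconds(txt):
    number, unit = re.match(r"(\d+)([a-z]+)", txt).groups()
    return int(number) * UNITS[unit]

def parse_schedule(schedule):
    always_keep = 0
    rules = []
    for part in schedule.split(","):
        if part.isdigit():
            always_keep = int(part)      # plain number: keep the last N snapshots
        else:
            period, ttl = [to_seconds(t) for t in re.findall(r"\d+[a-z]+", part)]
            rules.append((period, ttl))  # keep one snapshot per period, for ttl
    return always_keep, rules

print(parse_schedule("10,1d1w,1w1m,1m1y"))
```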

#### More details about the Thinner
### More details about the Thinner

We will give a practical example of how the thinner operates.
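
The worked example itself falls outside this diff hunk, but the general idea can be shown with a toy simulation. This is an assumption-laden sketch, not the project's actual Thinner class: it assumes time is divided into blocks of the rule's period and that the oldest snapshot within each block is the one kept until the rule's TTL expires.

```python
# Toy illustration of progressive thinning for a single "1d1w" style rule:
# keep one snapshot per day, for one week.
import time

PERIOD = 86400            # 1 day
TTL = 7 * 86400           # 1 week
now = int(time.time())

# pretend we have a snapshot every 6 hours for the last 10 days
snapshots = [now - i * 6 * 3600 for i in range(40)]

keeps = {}
for stamp in sorted(snapshots):
    if now - stamp > TTL:
        continue                      # older than the rule's TTL: not kept by this rule
    block = stamp // PERIOD           # which day-block this snapshot falls into
    keeps.setdefault(block, stamp)    # keep only the first (oldest) snapshot per block

print("{} snapshots kept by this rule".format(len(keeps)))
```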

@@ -324,11 +390,10 @@ Snapshots on the source that still have to be send to the target wont be destroy

## Tips

* Use ```--debug``` if something goes wrong and you want to see the commands that are executed. This will also stop at the first error.
* You can split up the snapshotting and sending tasks by creating two cronjobs. Use ```--no-send``` for the snapshotter-cronjob and use ```--no-snapshot``` for the send-cronjob. This is usefull if you only want to send at night or if your send take too long.
* You can split up the snapshotting and sending tasks by creating two cronjobs. Create a separate snapshotter-cronjob by just omitting target-path.
* Set the ```readonly``` property of the target filesystem to ```on```. This prevents changes on the target side. (Normally, if there are changes the next backup will fail and will require a zfs rollback.) Note that readonly means you cant change the CONTENTS of the dataset directly. Its still possible to receive new datasets and manipulate properties etc.
* Use ```--clear-refreservation``` to save space on your backup server.
* Use ```--clear-mountpoint``` to prevent the target server from mounting the backupped filesystem in the wrong place during a reboot.
* Use ```--resume``` to be able to resume aborted backups. (not all zfs versions support this)

### Speeding up SSH

@@ -374,22 +439,24 @@ usage: zfs-autobackup [-h] [--ssh-config SSH_CONFIG] [--ssh-source SSH_SOURCE]
                      [--keep-target KEEP_TARGET] [--other-snapshots]
                      [--no-snapshot] [--no-send] [--min-change MIN_CHANGE]
                      [--allow-empty] [--ignore-replicated] [--no-holds]
                      [--resume] [--strip-path STRIP_PATH]
                      [--clear-refreservation] [--clear-mountpoint]
                      [--strip-path STRIP_PATH] [--clear-refreservation]
                      [--clear-mountpoint]
                      [--filter-properties FILTER_PROPERTIES]
                      [--set-properties SET_PROPERTIES] [--rollback]
                      [--destroy-incompatible] [--ignore-transfer-errors]
                      [--raw] [--test] [--verbose] [--debug] [--debug-output]
                      [--progress]
                      backup_name target_path
                      backup-name [target-path]

zfs-autobackup v3.0-rc8 - Copyright 2020 E.H.Eefting (edwin@datux.nl)
zfs-autobackup v3.0-rc12 - Copyright 2020 E.H.Eefting (edwin@datux.nl)

positional arguments:
  backup_name           Name of the backup (you should set the zfs property
  backup-name           Name of the backup (you should set the zfs property
                        "autobackup:backup-name" to true on filesystems you
                        want to backup
  target_path           Target ZFS filesystem
  target-path           Target ZFS filesystem (optional: if not specified,
                        zfs-autobackup will only operate as snapshot-tool on
                        source)

optional arguments:
  -h, --help            show this help message and exit

@@ -409,29 +476,23 @@ optional arguments:
                        10,1d1w,1w1m,1m1y
  --other-snapshots     Send over other snapshots as well, not just the ones
                        created by this tool.
  --no-snapshot         Dont create new snapshots (usefull for finishing
  --no-snapshot         Don't create new snapshots (useful for finishing
                        uncompleted backups, or cleanups)
  --no-send             Dont send snapshots (usefull for cleanups, or if you
                        want a separate send-cronjob)
  --no-send             Don't send snapshots (useful for cleanups, or if you
                        want a serperate send-cronjob)
  --min-change MIN_CHANGE
                        Number of bytes written after which we consider a
                        dataset changed (default 1)
  --allow-empty         If nothing has changed, still create empty snapshots.
                        (same as --min-change=0)
  --ignore-replicated   Ignore datasets that seem to be replicated some other
                        way. (No changes since lastest snapshot. Usefull for
                        way. (No changes since lastest snapshot. Useful for
                        proxmox HA replication)
  --no-holds            Dont lock snapshots on the source. (Usefull to allow
  --no-holds            Don't lock snapshots on the source. (Useful to allow
                        proxmox HA replication to switches nodes)
  --resume              Support resuming of interrupted transfers by using the
                        zfs extensible_dataset feature (both zpools should
                        have it enabled) Disadvantage is that you need to use
                        zfs recv -A if another snapshot is created on the
                        target during a receive. Otherwise it will keep
                        failing.
  --strip-path STRIP_PATH
                        Number of directory to strip from path (use 1 when
                        cloning zones between 2 SmartOS machines)
                        Number of directories to strip from target path (use 1
                        when cloning zones between 2 SmartOS machines)
  --clear-refreservation
                        Filter "refreservation" property. (recommended, safes
                        space. same as --filter-properties refreservation)

@@ -443,7 +504,7 @@ optional arguments:
                        filesystems. (you can still restore them with zfs
                        inherit -S)
  --set-properties SET_PROPERTIES
                        List of properties to override when receiving
                        List of propererties to override when receiving
                        filesystems. (you can still restore them with zfs
                        inherit -S)
  --rollback            Rollback changes to the latest target snapshot before

@@ -454,7 +515,7 @@ optional arguments:
                        care! (implies --rollback)
  --ignore-transfer-errors
                        Ignore transfer errors (still checks if received
                        filesystem exists. usefull for acltype errors)
                        filesystem exists. useful for acltype errors)
  --raw                 For encrypted datasets, send data exactly as it exists
                        on disk.
  --test                dont change anything, just show what would be done

@@ -463,7 +524,8 @@ optional arguments:
  --debug               Show zfs commands that are executed, stops after an
                        exception.
  --debug-output        Show zfs commands and their output/exit codes. (noisy)
  --progress            show zfs progress output (to stderr)
  --progress            show zfs progress output (to stderr). Enabled by
                        default on ttys.

When a filesystem fails, zfs_backup will continue and report the number of
failures at that end. Also the exit code will indicate the number of failures.

@@ -477,10 +539,7 @@ You forgot to setup automatic login via SSH keys, look in the example how to do

### It says 'cannot receive incremental stream: invalid backup stream'

This usually means you've created a new snapshot on the target side during a backup:

* Solution 1: Restart zfs-autobackup and make sure you don't use --resume. If you did use --resume, be sure to "abort" the receive on the target side with zfs recv -A.
* Solution 2: Destroy the newly created snapshot and restart zfs-autobackup.
This usually means you've created a new snapshot on the target side during a backup. If you restart zfs-autobackup, it will automaticly abort the invalid partially received snapshot and start over.

### It says 'internal error: Invalid argument'

@@ -548,12 +607,22 @@ I use the following backup script on the backup server:
for H in h4 h5 h6; do
  echo "################################### DATA $H"
  #backup data filesystems to a common place
  ./zfs-autobackup --ssh-source root@$H data_smartos03 zones/backup/zfsbackups/pxe1_data --clear-refreservation --clear-mountpoint --ignore-transfer-errors --strip-path 2 --verbose --resume --ignore-replicated --min-change 200000 --no-holds $@
  ./zfs-autobackup --ssh-source root@$H data_smartos03 zones/backup/zfsbackups/pxe1_data --clear-refreservation --clear-mountpoint --ignore-transfer-errors --strip-path 2 --verbose --ignore-replicated --min-change 200000 --no-holds $@
  zabbix-job-status backup_$H""_data_smartos03 daily $? >/dev/null 2>/dev/null

  echo "################################### RPOOL $H"
  #backup rpool to own place
  ./zfs-autobackup --ssh-source root@$H $H""_smartos03 zones/backup/zfsbackups/$H --verbose --clear-refreservation --clear-mountpoint --resume --ignore-transfer-errors $@
  ./zfs-autobackup --ssh-source root@$H $H""_smartos03 zones/backup/zfsbackups/$H --verbose --clear-refreservation --clear-mountpoint --ignore-transfer-errors $@
  zabbix-job-status backup_$H""_smartos03 daily $? >/dev/null 2>/dev/null
done
```
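
The same pattern can be scripted in Python if you prefer. The sketch below is illustrative only; `zabbix-job-status` is simply the external reporting tool used in the shell script above, and the paths are the example's.

```python
# Sketch: run one backup job per host and report its exit status,
# mirroring the shell loop above.
import subprocess

def backup_host(host):
    rc = subprocess.call([
        "./zfs-autobackup", "--ssh-source", "root@" + host,
        "data_smartos03", "zones/backup/zfsbackups/pxe1_data",
        "--clear-refreservation", "--clear-mountpoint",
        "--ignore-transfer-errors", "--strip-path", "2",
        "--verbose", "--ignore-replicated", "--min-change", "200000", "--no-holds",
    ])
    # zfs-autobackup's exit code is the number of failed filesystems (0 = all fine)
    subprocess.call(["zabbix-job-status", "backup_{}_data_smartos03".format(host), "daily", str(rc)])
    return rc

for host in ["h4", "h5", "h6"]:
    backup_host(host)
```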

# Sponsor list

This project was sponsorred by:

* (None so far)
basetest.py (new file, 91 lines)

@@ -0,0 +1,91 @@

import subprocess
import random

#default test stuff
import unittest2
import subprocess
import time
from pprint import *
from bin.zfs_autobackup import *
from mock import *
import contextlib
import sys
import io

TEST_POOLS="test_source1 test_source2 test_target1"
ZFS_USERSPACE= subprocess.check_output("dpkg-query -W zfsutils-linux |cut -f2", shell=True).decode('utf-8').rstrip()
ZFS_KERNEL= subprocess.check_output("modinfo zfs|grep ^version |sed 's/.* //'", shell=True).decode('utf-8').rstrip()

print("###########################################")
print("#### Unit testing against:")
print("#### Python :"+sys.version.replace("\n", " "))
print("#### ZFS userspace :"+ZFS_USERSPACE)
print("#### ZFS kernel :"+ZFS_KERNEL)
print("#############################################")


# for python2 compatibility
if sys.version_info.major==2:
    OutputIO=io.BytesIO
else:
    OutputIO=io.StringIO


# for python2 compatibility (python 3 has this already)
@contextlib.contextmanager
def redirect_stdout(target):
    original = sys.stdout
    try:
        sys.stdout = target
        yield
    finally:
        sys.stdout = original

# for python2 compatibility (python 3 has this already)
@contextlib.contextmanager
def redirect_stderr(target):
    original = sys.stderr
    try:
        sys.stderr = target
        yield
    finally:
        sys.stderr = original


def shelltest(cmd):
    """execute and print result as nice copypastable string for unit tests (adds extra newlines on top/bottom)"""
    ret=(subprocess.check_output(cmd , shell=True).decode('utf-8'))
    print("######### result of: {}".format(cmd))
    print(ret)
    print("#########")
    ret='\n'+ret
    return(ret)

def prepare_zpools():
    print("Preparing zfs filesystems...")

    #need ram blockdevice
    subprocess.check_call("modprobe brd rd_size=512000", shell=True)

    #remove old stuff
    subprocess.call("zpool destroy test_source1 2>/dev/null", shell=True)
    subprocess.call("zpool destroy test_source2 2>/dev/null", shell=True)
    subprocess.call("zpool destroy test_target1 2>/dev/null", shell=True)

    #create pools
    subprocess.check_call("zpool create test_source1 /dev/ram0", shell=True)
    subprocess.check_call("zpool create test_source2 /dev/ram1", shell=True)
    subprocess.check_call("zpool create test_target1 /dev/ram2", shell=True)

    #create test structure
    subprocess.check_call("zfs create -p test_source1/fs1/sub", shell=True)
    subprocess.check_call("zfs create -p test_source2/fs2/sub", shell=True)
    subprocess.check_call("zfs create -p test_source2/fs3/sub", shell=True)
    subprocess.check_call("zfs set autobackup:test=true test_source1/fs1", shell=True)
    subprocess.check_call("zfs set autobackup:test=child test_source2/fs2", shell=True)

    print("Prepare done")
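
For context, a test written against this harness might look like the sketch below. The test class and its assertion are illustrative only and not part of the diff; they merely show how `prepare_zpools()` and `shelltest()` are meant to be combined.

```python
# Illustrative sketch of a unit test built on basetest.py.
from basetest import *

class TestSelection(unittest2.TestCase):
    def setUp(self):
        prepare_zpools()

    def test_source1_fs1_selected(self):
        out = shelltest("zfs get -H -o name,value autobackup:test test_source1/fs1")
        self.assertIn("true", out)

if __name__ == "__main__":
    unittest2.main()
```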
bin/zfs-autobackup (main script)

@@ -1,7 +1,7 @@
#!/usr/bin/env python
# -*- coding: utf8 -*-

# (c)edwin@datux.nl - Released under GPL
# (c)edwin@datux.nl - Released under GPL V3
#
# Greetings from eth0 2019 :)

@@ -26,8 +26,8 @@ if sys.stdout.isatty():
    except ImportError:
        pass

VERSION="3.0-rc9"
HEADER="zfs-autobackup v{} - Copyright 2020 E.H.Eefting (edwin@datux.nl)\n".format(VERSION)
VERSION="3.0"
HEADER="zfs-autobackup v{} - Copyright 2020 E.H.Eefting (edwin@datux.nl)".format(VERSION)

class Log:
    def __init__(self, show_debug=False, show_verbose=False):

@@ -40,6 +40,7 @@ class Log:
            print(colorama.Fore.RED+colorama.Style.BRIGHT+ "! "+txt+colorama.Style.RESET_ALL, file=sys.stderr)
        else:
            print("! "+txt, file=sys.stderr)
        sys.stderr.flush()

    def verbose(self, txt):
        if self.show_verbose:

@@ -47,6 +48,7 @@ class Log:
            print(colorama.Style.NORMAL+ " "+txt+colorama.Style.RESET_ALL)
        else:
            print(" "+txt)
        sys.stdout.flush()

    def debug(self, txt):
        if self.show_debug:

@@ -54,6 +56,7 @@ class Log:
            print(colorama.Fore.GREEN+ "# "+txt+colorama.Style.RESET_ALL)
        else:
            print("# "+txt)
        sys.stdout.flush()

@@ -223,53 +226,6 @@ class Thinner:


# ######### Thinner testing code
# now=int(time.time())
#
# t=Thinner("1d1w,1w1m,1m6m,1y2y", always_keep=1)
#
# import random
#
# class Thing:
#     def __init__(self, timestamp):
#         self.timestamp=timestamp
#
#     def __str__(self):
#         age=now-self.timestamp
#         struct=time.localtime(self.timestamp)
#         return("{} ({} days old)".format(time.strftime("%Y-%m-%d %H:%M:%S",struct),int(age/(3600*24))))
#
# def test():
#     global now
#     things=[]
#
#     while True:
#         print("#################### {}".format(time.strftime("%Y-%m-%d %H:%M:%S",time.localtime(now))))
#
#         (keeps, removes)=t.run(things, now)
#
#         print ("### KEEP ")
#         for thing in keeps:
#             print(thing)
#
#         print ("### REMOVE ")
#         for thing in removes:
#             print(thing)
#
#         things=keeps
#
#         #increase random amount of time and maybe add a thing
#         now=now+random.randint(0,160000)
#         if random.random()>=0:
#             things.append(Thing(now))
#
#         sys.stdin.readline()
#
# test()

class cached_property(object):
    """ A property that is only computed once per instance and then replaces
        itself with an ordinary attribute. Deleting the attribute resets the

@@ -298,17 +254,28 @@ class cached_property(object):

        return obj._cached_properties[propname]

class Logger():

    #simple logging stubs
    def debug(self, txt):
        print("DEBUG : "+txt)

    def verbose(self, txt):
        print("VERBOSE: "+txt)

    def error(self, txt):
        print("ERROR : "+txt)


class ExecuteNode:
class ExecuteNode(Logger):
    """an endpoint to execute local or remote commands via ssh"""

    def __init__(self, ssh_config=None, ssh_to=None, readonly=False, debug_output=False):
        """ssh_config: custom ssh config
           ssh_to: server you want to ssh to. none means local
           readonly: only execute commands that don't make any changes (usefull for testing-runs)
           readonly: only execute commands that don't make any changes (useful for testing-runs)
           debug_output: show output and exit codes of commands in debugging output.
        """

@@ -346,11 +313,14 @@ class ExecuteNode:

    def run(self, cmd, input=None, tab_split=False, valid_exitcodes=[ 0 ], readonly=False, hide_errors=False, pipe=False, return_stderr=False):
        """run a command on the node

        readonly: make this True if the command doesn't make any changes and is safe to execute in testmode
        pipe: Instead of executing, return a pipe-handle to be used to input to another run() command. (just like a | in linux)
        cmd: the actual command, should be a list, where the first item is the command and the rest are parameters.
        input: Can be None, a string or a pipe-handle you got from another run()
        return_stderr: return both stdout and stderr as a tuple
        tab_split: split tabbed files in output into a list
        valid_exitcodes: list of valid exit codes for this command (checks exit code of both sides of a pipe)
        readonly: make this True if the command doesn't make any changes and is safe to execute in testmode
        hide_errors: don't show stderr output as error, instead show it as debugging output (use to hide expected errors)
        pipe: Instead of executing, return a pipe-handle to be used to input to another run() command. (just like a | in linux)
        return_stderr: return both stdout and stderr as a tuple. (only returns stderr from this side of the pipe)
        """
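# Aside (illustrative, not part of the diff): based on the docstring above,
# typical run() usage looks roughly like the hedged sketch below; it assumes
# the script's classes are importable and the dataset names are made up.
#
#   node = ExecuteNode(ssh_to="root@server1.server.com")
#
#   # simple read-only command: returns a list of output lines
#   names = node.run(["zfs", "list", "-H", "-o", "name"], readonly=True)
#
#   # piping one run() into another, like "zfs send | zfs recv"
#   send = node.run(["zfs", "send", "rpool/data@snap"], pipe=True)
#   node.run(["zfs", "recv", "backup/data"], input=send)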

        encoded_cmd=[]

@@ -368,7 +338,9 @@ class ExecuteNode:
            #(this is necessary if LC_ALL=en_US.utf8 is not set in the environment)
            for arg in cmd:
                #add single quotes for remote commands to support spaces and other weird stuff (remote commands are executed in a shell)
                encoded_cmd.append( ("'"+arg+"'").encode('utf-8'))
                #and escape existing single quotes (bash needs ' to end the quoted string, then a \' for the actual quote and then another ' to start a new quoted string)
                #(and then python needs the double \ to get a single \)
                encoded_cmd.append( ("'" + arg.replace("'","'\\''") + "'").encode('utf-8'))

        else:
            for arg in cmd:
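# Aside (illustrative, not part of the diff): the same escaping expression as
# above, applied to a standalone string, shows what a quoted argument becomes.
#
#   arg = "it's a test"
#   quoted = "'" + arg.replace("'", "'\\''") + "'"
#   print(quoted)   # 'it'\''s a test' - safe to pass through a remote shell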
@@ -392,7 +364,8 @@ class ExecuteNode:

        #determine stdin
        if input==None:
            stdin=None
            #NOTE: Not None, otherwise it reads stdin from terminal!
            stdin=subprocess.PIPE
        elif isinstance(input,str) or type(input)=='unicode':
            self.debug("INPUT > \n"+input.rstrip())
            stdin=subprocess.PIPE

@@ -411,8 +384,12 @@ class ExecuteNode:

        #Note: make streaming?
        if isinstance(input,str) or type(input)=='unicode':
            p.stdin.write(input)
            p.stdin.write(input.encode('utf-8'))

        if p.stdin:
            p.stdin.close()

        #return pipe
        if pipe:
            return(p)

@@ -459,28 +436,98 @@ class ExecuteNode:
            if p.poll()!=None and ((not isinstance(input, subprocess.Popen)) or input.poll()!=None) and eof_count==len(selectors):
                break

        p.stderr.close()
        p.stdout.close()

        if self.debug_output:
            self.debug("EXIT > {}".format(p.returncode))

        #handle piped process error output and exit codes
        if isinstance(input, subprocess.Popen):
            input.stderr.close()
            input.stdout.close()

            if self.debug_output:
                self.debug("EXIT |> {}".format(input.returncode))
            if valid_exitcodes and input.returncode not in valid_exitcodes:
                raise(subprocess.CalledProcessError(input.returncode, "(pipe)"))

        if valid_exitcodes and p.returncode not in valid_exitcodes:
            raise(subprocess.CalledProcessError(p.returncode, encoded_cmd))

        if return_stderr:
            return ( output_lines, error_lines )
        else:
            return(output_lines)

class ZfsPool():
    """a zfs pool"""

    def __init__(self, zfs_node, name):
        """name: name of the pool
        """

        self.zfs_node=zfs_node
        self.name=name

    def __repr__(self):
        return("{}: {}".format(self.zfs_node, self.name))

    def __str__(self):
        return(self.name)

    def __eq__(self, obj):
        if not isinstance(obj, ZfsPool):
            return(False)

        return(self.name == obj.name)

    def verbose(self,txt):
        self.zfs_node.verbose("zpool {}: {}".format(self.name, txt))

    def error(self,txt):
        self.zfs_node.error("zpool {}: {}".format(self.name, txt))

    def debug(self,txt):
        self.zfs_node.debug("zpool {}: {}".format(self.name, txt))

    @cached_property
    def properties(self):
        """all zpool properties"""

        self.debug("Getting zpool properties")

        cmd=[
            "zpool", "get", "-H", "-p", "all", self.name
        ]

        ret={}

        for pair in self.zfs_node.run(tab_split=True, cmd=cmd, readonly=True, valid_exitcodes=[ 0 ]):
            if len(pair)==4:
                ret[pair[1]]=pair[2]

        return(ret)

    @property
    def features(self):
        """get list of active zpool features"""

        ret=[]
        for (key,value) in self.properties.items():
            if key.startswith("feature@"):
                feature=key.split("@")[1]
                if value=='enabled' or value=='active':
                    ret.append(feature)

        return(ret)
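# Aside (illustrative, not part of the diff): the features list above is what
# later gets passed into send_pipe()/recv_pipe(). A hedged usage sketch,
# assuming a ZfsNode instance called node already exists:
#
#   pool = node.get_zfs_pool("rpool")
#   if 'extensible_dataset' in pool.features:
#       print("pool can receive resumable streams (zfs recv -s)")
#   if 'large_blocks' in pool.features:
#       print("zfs send -L can be used for this pool")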

@@ -500,7 +547,7 @@ class ZfsDataset():

    def __init__(self, zfs_node, name, force_exists=None):
        """name: full path of the zfs dataset
        exists: specify if you already know a dataset exists or not. for performance reasons. (otherwise it will have to check with zfs list when needed)
        exists: specify if you already know a dataset exists or not. for performance and testing reasons. (otherwise it will have to check with zfs list when needed)
        """
        self.zfs_node=zfs_node
        self.name=name #full name

@@ -622,7 +669,7 @@ class ZfsDataset():
    @cached_property
    def exists(self):
        """check if dataset exists.
        Use force to force a specific value to be cached, if you already know. Usefull for performance reasons"""
        Use force to force a specific value to be cached, if you already know. Useful for performance reasons"""

        if self.force_exists!=None:

@@ -653,10 +700,11 @@ class ZfsDataset():

        self.verbose("Destroying")

        if self.is_snapshot:
            self.release()

        try:
            self.zfs_node.run(["zfs", "destroy", "-d", self.name])
            self.zfs_node.run(["zfs", "destroy", self.name])
            self.invalidate()
            self.force_exists=False
            return(True)

@@ -851,9 +899,9 @@ class ZfsDataset():

    @cached_property
    def recursive_datasets(self, types="filesystem,volume"):
        """get all datasets recursively under us"""
        """get all (non-snapshot) datasets recursively under us"""

        self.debug("Getting all datasets under us")
        self.debug("Getting all recursive datasets under us")

        names=self.zfs_node.run(tab_split=False, readonly=True, valid_exitcodes=[ 0 ], cmd=[
            "zfs", "list", "-r", "-t", types, "-o", "name", "-H", self.name

@@ -862,10 +910,22 @@ class ZfsDataset():
        return(self.from_names(names[1:]))

    def send_pipe(self, prev_snapshot=None, resume=True, resume_token=None, show_progress=False, raw=False):
    @cached_property
    def datasets(self, types="filesystem,volume"):
        """get all (non-snapshot) datasets directly under us"""

        self.debug("Getting all datasets under us")

        names=self.zfs_node.run(tab_split=False, readonly=True, valid_exitcodes=[ 0 ], cmd=[
            "zfs", "list", "-r", "-t", types, "-o", "name", "-H", "-d", "1", self.name
        ])

        return(self.from_names(names[1:]))

    def send_pipe(self, features, prev_snapshot=None, resume_token=None, show_progress=False, raw=False):
        """returns a pipe with zfs send output for this snapshot

        resume: Use resuming (both sides need to support it)
        resume_token: resume sending from this token. (in that case we don't need to know snapshot names)

        """

@@ -875,11 +935,20 @@ class ZfsDataset():
        cmd.extend(["zfs", "send", ])

        #all kind of performance options:
        cmd.append("-L") # large block support
        if 'large_blocks' in features and "-L" in self.zfs_node.supported_send_options:
            cmd.append("-L") # large block support (only if recordsize>128k which is seldomly used)

        if 'embedded_data' in features and "-e" in self.zfs_node.supported_send_options:
            cmd.append("-e") # WRITE_EMBEDDED, more compact stream

        if "-c" in self.zfs_node.supported_send_options:
            cmd.append("-c") # use compressed WRITE records
        if not resume:
            cmd.append("-D") # dedupped stream, sends less duplicate data

        #NOTE: performance is usually worse with this option, according to manual
        #also -D will be depricated in newer ZFS versions
        # if not resume:
        #     if "-D" in self.zfs_node.supported_send_options:
        #         cmd.append("-D") # dedupped stream, sends less duplicate data

        #raw? (for encryption)
        if raw:
@@ -914,11 +983,11 @@ class ZfsDataset():
        return(self.zfs_node.run(cmd, pipe=True))

    def recv_pipe(self, pipe, resume=True, filter_properties=[], set_properties=[], ignore_exit_code=False):
    def recv_pipe(self, pipe, features, filter_properties=[], set_properties=[], ignore_exit_code=False):
        """starts a zfs recv for this snapshot and uses pipe as input

        note: you can also call both a snapshot and filesystem object.
        the resulting zfs command is the same, only our object cache is invalidated differently.
        note: you can it both on a snapshot or filesystem object.
        The resulting zfs command is the same, only our object cache is invalidated differently.
        """
        #### build target command
        cmd=[]

@@ -937,8 +1006,9 @@ class ZfsDataset():
        #verbose output
        cmd.append("-v")

        if resume:
        if 'extensible_dataset' in features and "-s" in self.zfs_node.supported_recv_options:
            #support resuming
            self.debug("Enabled resume support")
            cmd.append("-s")

        cmd.append(self.filesystem_name)

@@ -967,7 +1037,7 @@ class ZfsDataset():
        # cmd.append("|mbuffer -m {}".format(args.buffer))

    def transfer_snapshot(self, target_snapshot, prev_snapshot=None, resume=True, show_progress=False, filter_properties=[], set_properties=[], ignore_recv_exit_code=False, resume_token=None, raw=False):
    def transfer_snapshot(self, target_snapshot, features, prev_snapshot=None, show_progress=False, filter_properties=[], set_properties=[], ignore_recv_exit_code=False, resume_token=None, raw=False):
        """transfer this snapshot to target_snapshot. specify prev_snapshot for incremental transfer

        connects a send_pipe() to recv_pipe()

@@ -986,8 +1056,8 @@ class ZfsDataset():
            target_snapshot.verbose("receiving incremental".format(self.snapshot_name))

        #do it
        pipe=self.send_pipe(resume=resume, show_progress=show_progress, prev_snapshot=prev_snapshot, resume_token=resume_token, raw=raw)
        target_snapshot.recv_pipe(pipe, resume=resume, filter_properties=filter_properties, set_properties=set_properties, ignore_exit_code=ignore_recv_exit_code)
        pipe=self.send_pipe(features=features, show_progress=show_progress, prev_snapshot=prev_snapshot, resume_token=resume_token, raw=raw)
        target_snapshot.recv_pipe(pipe, features=features, filter_properties=filter_properties, set_properties=set_properties, ignore_exit_code=ignore_recv_exit_code)

    def abort_resume(self):
        """abort current resume state"""

@@ -1024,9 +1094,7 @@ class ZfsDataset():
        return(None)

    def thin(self, keeps=[], ignores=[]):
    def thin_list(self, keeps=[], ignores=[]):
        """determines list of snapshots that should be kept or deleted based on the thinning schedule. cull the herd!
        keep: list of snapshots to always keep (usually the last)
        ignores: snapshots to completely ignore (usually incompatible target snapshots that are going to be destroyed anyway)

@@ -1039,6 +1107,18 @@ class ZfsDataset():
        return(self.zfs_node.thinner.thin(snapshots, keep_objects=keeps))

    def thin(self, skip_holds=False):
        """destroys snapshots according to thin_list, except last snapshot"""

        (keeps, obsoletes)=self.thin_list(keeps=self.our_snapshots[-1:])
        for obsolete in obsoletes:
            if skip_holds and obsolete.is_hold():
                obsolete.verbose("Keeping (common snapshot)")
            else:
                obsolete.destroy()
                self.snapshots.remove(obsolete)

    def find_common_snapshot(self, target_dataset):
        """find latest common snapshot between us and target
        returns None if its an initial transfer
@@ -1114,7 +1194,7 @@ class ZfsDataset():

    def sync_snapshots(self, target_dataset, show_progress=False, resume=True, filter_properties=[], set_properties=[], ignore_recv_exit_code=False, source_holds=True, rollback=False, raw=False, other_snapshots=False, no_send=False, destroy_incompatible=False):
    def sync_snapshots(self, target_dataset, features, show_progress=False, filter_properties=[], set_properties=[], ignore_recv_exit_code=False, source_holds=True, rollback=False, raw=False, other_snapshots=False, no_send=False, destroy_incompatible=False):
        """sync this dataset's snapshots to target_dataset, while also thinning out old snapshots along the way."""

        #determine common and start snapshot

@@ -1137,13 +1217,13 @@ class ZfsDataset():
        #now let thinner decide what we want on both sides as final state (after all transfers are done)
        self.debug("Create thinning list")
        if self.our_snapshots:
            (source_keeps, source_obsoletes)=self.thin(keeps=[self.our_snapshots[-1]])
            (source_keeps, source_obsoletes)=self.thin_list(keeps=[self.our_snapshots[-1]])
        else:
            source_keeps=[]
            source_obsoletes=[]

        if target_dataset.our_snapshots:
            (target_keeps, target_obsoletes)=target_dataset.thin(keeps=[target_dataset.our_snapshots[-1]], ignores=incompatible_target_snapshots)
            (target_keeps, target_obsoletes)=target_dataset.thin_list(keeps=[target_dataset.our_snapshots[-1]], ignores=incompatible_target_snapshots)
        else:
            target_keeps=[]
            target_obsoletes=[]

@@ -1212,7 +1292,7 @@ class ZfsDataset():
                #does target actually want it?
                if target_snapshot not in target_obsoletes:
                    ( allowed_filter_properties, allowed_set_properties ) = self.get_allowed_properties(filter_properties, set_properties) #NOTE: should we let transfer_snapshot handle this?
                    source_snapshot.transfer_snapshot(target_snapshot, prev_snapshot=prev_source_snapshot, show_progress=show_progress, resume=resume, filter_properties=allowed_filter_properties, set_properties=allowed_set_properties, ignore_recv_exit_code=ignore_recv_exit_code, resume_token=resume_token, raw=raw)
                    source_snapshot.transfer_snapshot(target_snapshot, features=features, prev_snapshot=prev_source_snapshot, show_progress=show_progress, filter_properties=allowed_filter_properties, set_properties=allowed_set_properties, ignore_recv_exit_code=ignore_recv_exit_code, resume_token=resume_token, raw=raw)
                resume_token=None

                #hold the new common snapshots and release the previous ones

@@ -1229,7 +1309,7 @@ class ZfsDataset():
                    prev_source_snapshot.destroy()

                # destroy the previous target snapshot if obsolete (usually this is only the common_snapshot, the rest was already destroyed or will not be send)
                prev_target_snapshot=target_dataset.find_snapshot(common_snapshot)
                prev_target_snapshot=target_dataset.find_snapshot(prev_source_snapshot)
                if prev_target_snapshot in target_obsoletes:
                    prev_target_snapshot.destroy()
@@ -1251,14 +1331,14 @@ class ZfsDataset():

class ZfsNode(ExecuteNode):
    """a node that contains zfs datasets. implements global (systemwide/pool wide) zfs commands"""

    def __init__(self, backup_name, zfs_autobackup, ssh_config=None, ssh_to=None, readonly=False, description="", debug_output=False, thinner=Thinner()):
    def __init__(self, backup_name, logger, ssh_config=None, ssh_to=None, readonly=False, description="", debug_output=False, thinner=Thinner()):
        self.backup_name=backup_name
        if not description:
        if not description and ssh_to:
            self.description=ssh_to
        else:
            self.description=description

        self.zfs_autobackup=zfs_autobackup #for logging
        self.logger=logger

        if ssh_config:
            self.verbose("Using custom SSH config: {}".format(ssh_config))

@@ -1277,10 +1357,53 @@ class ZfsNode(ExecuteNode):

        self.thinner=thinner

        #list of ZfsPools
        self.__pools={}

        ExecuteNode.__init__(self, ssh_config=ssh_config, ssh_to=ssh_to, readonly=readonly, debug_output=debug_output)

    @cached_property
    def supported_send_options(self):
        """list of supported options, for optimizing sends"""
        #not every zfs implementation supports them all

        ret=[]
        for option in ["-L", "-e", "-c" ]:
            if self.valid_command(["zfs","send", option, "zfs_autobackup_option_test"]):
                ret.append(option)
        return(ret)

    @cached_property
    def supported_recv_options(self):
        """list of supported options"""
        #not every zfs implementation supports them all

        ret=[]
        for option in ["-s" ]:
            if self.valid_command(["zfs","recv", option, "zfs_autobackup_option_test"]):
                ret.append(option)
        return(ret)

    def valid_command(self, cmd):
        """test if a specified zfs options are valid exit code. use this to determine support options"""

        try:
            self.run(cmd, hide_errors=True, valid_exitcodes=[0,1])
        except subprocess.CalledProcessError as e:
            return False

        return True

    #TODO: also create a get_zfs_dataset() function that stores all the objects in a dict. This should optimize caching a bit and is more consistent.
    def get_zfs_pool(self, name):
        """get a ZfsPool() object from specified name. stores objects internally to enable caching"""

        return(self.__pools.setdefault(name, ZfsPool(self, name)))

    def reset_progress(self):
        """reset progress output counters"""
        self._progress_total_bytes=0
@@ -1302,7 +1425,7 @@ class ZfsNode(ExecuteNode):
        #always output for debugging offcourse
        self.debug(prefix+line.rstrip())

        #actual usefull info
        #actual useful info
        if len(progress_fields)>=3:
            if progress_fields[0]=='full' or progress_fields[0]=='size':
                self._progress_total_bytes=int(progress_fields[2])

@@ -1317,8 +1440,8 @@ class ZfsNode(ExecuteNode):
            bytes_left=self._progress_total_bytes-bytes
            minutes_left=int((bytes_left/(bytes/(time.time()-self._progress_start_time)))/60)

            print(">>> {}% {}MB/s (total {}MB, {} minutes left) \r".format(percentage, speed, int(self._progress_total_bytes/(1024*1024)), minutes_left), end='')
            sys.stdout.flush()
            print(">>> {}% {}MB/s (total {}MB, {} minutes left) \r".format(percentage, speed, int(self._progress_total_bytes/(1024*1024)), minutes_left), end='', file=sys.stderr)
            sys.stderr.flush()

            return
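# Aside (illustrative, not part of the diff): the ETA printed above follows
# directly from bytes transferred so far, as in this standalone example with
# made-up numbers.
#
#   import time
#   total_bytes = 50 * 1024 * 1024 * 1024      # 50 GiB to send
#   transferred = 10 * 1024 * 1024 * 1024      # 10 GiB done
#   start_time = time.time() - 600             # started 10 minutes ago
#   speed = transferred / (time.time() - start_time)           # bytes per second
#   minutes_left = int((total_bytes - transferred) / speed / 60)
#   print("{} minutes left".format(minutes_left))               # ~40 minutes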
@ -1336,13 +1459,13 @@ class ZfsNode(ExecuteNode):
|
||||
self.parse_zfs_progress(line, hide_errors, "STDERR > ")
|
||||
|
||||
def verbose(self,txt):
|
||||
self.zfs_autobackup.verbose("{} {}".format(self.description, txt))
|
||||
self.logger.verbose("{} {}".format(self.description, txt))
|
||||
|
||||
def error(self,txt,titles=[]):
|
||||
self.zfs_autobackup.error("{} {}".format(self.description, txt))
|
||||
self.logger.error("{} {}".format(self.description, txt))
|
||||
|
||||
def debug(self,txt, titles=[]):
|
||||
self.zfs_autobackup.debug("{} {}".format(self.description, txt))
|
||||
self.logger.debug("{} {}".format(self.description, txt))
|
||||
|
||||
def new_snapshotname(self):
|
||||
"""determine uniq new snapshotname"""
|
||||
@ -1370,7 +1493,7 @@ class ZfsNode(ExecuteNode):
|
||||
|
||||
pools[pool].append(snapshot)
|
||||
|
||||
#add snapshot to cache (also usefull in testmode)
|
||||
#add snapshot to cache (also useful in testmode)
|
||||
dataset.snapshots.append(snapshot) #NOTE: this will trigger zfs list
|
||||
|
||||
if not pools:
|
||||
@ -1394,6 +1517,9 @@ class ZfsNode(ExecuteNode):
|
||||
|
||||
returns: list of ZfsDataset
|
||||
"""
|
||||
|
||||
self.debug("Getting selected datasets")
|
||||
|
||||
#get all source filesystems that have the backup property
|
||||
lines=self.run(tab_split=True, readonly=True, cmd=[
|
||||
"zfs", "get", "-t", "volume,filesystem", "-o", "name,value,source", "-s", "local,inherited", "-H", "autobackup:"+self.backup_name
|
||||
@@ -1422,21 +1548,16 @@ class ZfsNode(ExecuteNode):
|
||||
selected_filesystems.append(dataset)
|
||||
dataset.verbose("Selected (inherited selection)")
|
||||
else:
|
||||
dataset.verbose("Ignored (already a backup)")
|
||||
dataset.debug("Ignored (already a backup)")
|
||||
else:
|
||||
dataset.verbose("Ignored (only childs)")
|
||||
|
||||
return(selected_filesystems)
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
class ZfsAutobackup:
|
||||
"""main class"""
|
||||
def __init__(self):
|
||||
def __init__(self,argv):
|
||||
|
||||
parser = argparse.ArgumentParser(
|
||||
description=HEADER,
|
||||
@@ -1447,32 +1568,32 @@ class ZfsAutobackup:
|
||||
parser.add_argument('--keep-source', type=str, default="10,1d1w,1w1m,1m1y", help='Thinning schedule for old source snapshots. Default: %(default)s')
|
||||
parser.add_argument('--keep-target', type=str, default="10,1d1w,1w1m,1m1y", help='Thinning schedule for old target snapshots. Default: %(default)s')
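# (for illustration: a schedule like "10,1d1w,1w1m,1m1y" presumably means always keep the last 10 snapshots,
#  then one per day for a week, one per week for a month and one per month for a year)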
|
||||
|
||||
parser.add_argument('backup_name', help='Name of the backup (you should set the zfs property "autobackup:backup-name" to true on filesystems you want to backup')
|
||||
parser.add_argument('target_path', help='Target ZFS filesystem')
|
||||
parser.add_argument('backup_name', metavar='backup-name', help='Name of the backup (you should set the zfs property "autobackup:backup-name" to true on filesystems you want to back up)')
|
||||
parser.add_argument('target_path', metavar='target-path', default=None, nargs='?', help='Target ZFS filesystem (optional: if not specified, zfs-autobackup will only operate as snapshot-tool on source)')
|
||||
|
||||
parser.add_argument('--other-snapshots', action='store_true', help='Send over other snapshots as well, not just the ones created by this tool.')
|
||||
parser.add_argument('--no-snapshot', action='store_true', help='Don\'t create new snapshots (usefull for finishing uncompleted backups, or cleanups)')
|
||||
parser.add_argument('--no-send', action='store_true', help='Don\'t send snapshots (usefull for cleanups, or if you want a serperate send-cronjob)')
|
||||
parser.add_argument('--no-snapshot', action='store_true', help='Don\'t create new snapshots (useful for finishing uncompleted backups, or cleanups)')
|
||||
parser.add_argument('--no-send', action='store_true', help='Don\'t send snapshots (useful for cleanups, or if you want a separate send-cronjob)')
|
||||
parser.add_argument('--min-change', type=int, default=1, help='Number of bytes written after which we consider a dataset changed (default %(default)s)')
|
||||
parser.add_argument('--allow-empty', action='store_true', help='If nothing has changed, still create empty snapshots. (same as --min-change=0)')
|
||||
parser.add_argument('--ignore-replicated', action='store_true', help='Ignore datasets that seem to be replicated some other way. (No changes since lastest snapshot. Usefull for proxmox HA replication)')
|
||||
parser.add_argument('--no-holds', action='store_true', help='Don\'t lock snapshots on the source. (Usefull to allow proxmox HA replication to switches nodes)')
|
||||
#not sure if this ever was usefull:
|
||||
parser.add_argument('--ignore-replicated', action='store_true', help='Ignore datasets that seem to be replicated some other way. (No changes since latest snapshot. Useful for proxmox HA replication)')
|
||||
parser.add_argument('--no-holds', action='store_true', help='Don\'t lock snapshots on the source. (Useful to allow proxmox HA replication to switch nodes)')
|
||||
#not sure if this ever was useful:
|
||||
# parser.add_argument('--ignore-new', action='store_true', help='Ignore filesystem if there are already newer snapshots for it on the target (use with caution)')
|
||||
|
||||
parser.add_argument('--resume', action='store_true', help='Support resuming of interrupted transfers by using the zfs extensible_dataset feature (both zpools should have it enabled) Disadvantage is that you need to use zfs recv -A if another snapshot is created on the target during a receive. Otherwise it will keep failing.')
|
||||
parser.add_argument('--strip-path', default=0, type=int, help='Number of directory to strip from path (use 1 when cloning zones between 2 SmartOS machines)')
|
||||
parser.add_argument('--resume', action='store_true', help=argparse.SUPPRESS)
|
||||
parser.add_argument('--strip-path', default=0, type=int, help='Number of directories to strip from target path (use 1 when cloning zones between 2 SmartOS machines)')
|
||||
# parser.add_argument('--buffer', default="", help='Use mbuffer with specified size to speedup zfs transfer. (e.g. --buffer 1G) Will also show nice progress output.')
|
||||
|
||||
|
||||
# parser.add_argument('--destroy-stale', action='store_true', help='Destroy stale backups that have no more snapshots. Be sure to verify the output before using this! ')
|
||||
parser.add_argument('--clear-refreservation', action='store_true', help='Filter "refreservation" property. (recommended, saves space. same as --filter-properties refreservation)')
|
||||
parser.add_argument('--clear-mountpoint', action='store_true', help='Set property canmount=noauto for new datasets. (recommended, prevents mount conflicts. same as --set-properties canmount=noauto)')
|
||||
parser.add_argument('--filter-properties', type=str, help='List of properties to "filter" when receiving filesystems. (you can still restore them with zfs inherit -S)')
|
||||
parser.add_argument('--set-properties', type=str, help='List of properties to override when receiving filesystems. (you can still restore them with zfs inherit -S)')
|
||||
parser.add_argument('--rollback', action='store_true', help='Rollback changes to the latest target snapshot before starting. (normally you can prevent changes by setting the readonly property on the target_path to on)')
|
||||
parser.add_argument('--destroy-incompatible', action='store_true', help='Destroy incompatible snapshots on target. Use with care! (implies --rollback)')
|
||||
parser.add_argument('--ignore-transfer-errors', action='store_true', help='Ignore transfer errors (still checks if received filesystem exists. usefull for acltype errors)')
|
||||
parser.add_argument('--destroy-missing', type=str, default=None, help='Destroy datasets on target that are missing on the source. Specify the time since the last snapshot, e.g: --destroy-missing 30d')
|
||||
parser.add_argument('--ignore-transfer-errors', action='store_true', help='Ignore transfer errors (still checks if received filesystem exists. useful for acltype errors)')
|
||||
parser.add_argument('--raw', action='store_true', help='For encrypted datasets, send data exactly as it exists on disk.')
|
||||
|
||||
|
||||
@@ -1480,13 +1601,16 @@ class ZfsAutobackup:
|
||||
parser.add_argument('--verbose', action='store_true', help='verbose output')
|
||||
parser.add_argument('--debug', action='store_true', help='Show zfs commands that are executed, stops after an exception.')
|
||||
parser.add_argument('--debug-output', action='store_true', help='Show zfs commands and their output/exit codes. (noisy)')
|
||||
parser.add_argument('--progress', action='store_true', help='show zfs progress output (to stderr)')
|
||||
parser.add_argument('--progress', action='store_true', help='show zfs progress output (to stderr). Enabled by default on ttys.')
|
||||
|
||||
#note: args is the only global variable we use, since it's a global readonly setting anyway
|
||||
args = parser.parse_args()
|
||||
args = parser.parse_args(argv)
|
||||
|
||||
self.args=args
|
||||
|
||||
if sys.stderr.isatty():
|
||||
args.progress=True
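#progress output defaults to on when stderr is an interactive terminal (matches the --progress help text)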
|
||||
|
||||
if args.debug_output:
|
||||
args.debug=True
|
||||
|
||||
@@ -1501,6 +1625,9 @@ class ZfsAutobackup:
|
||||
|
||||
self.log=Log(show_debug=self.args.debug, show_verbose=self.args.verbose)
|
||||
|
||||
if args.resume:
|
||||
self.verbose("NOTE: The --resume option isn't needed anymore (its autodetected now)")
|
||||
|
||||
|
||||
def verbose(self,txt,titles=[]):
|
||||
self.log.verbose(txt)
|
||||
@@ -1515,58 +1642,30 @@ class ZfsAutobackup:
|
||||
self.log.verbose("")
|
||||
self.log.verbose("#### "+title)
|
||||
|
||||
def run(self):
|
||||
|
||||
self.verbose (HEADER)
|
||||
|
||||
if self.args.test:
|
||||
self.verbose("TEST MODE - SIMULATING WITHOUT MAKING ANY CHANGES")
|
||||
|
||||
self.set_title("Settings summary")
|
||||
|
||||
description="[Source]"
|
||||
source_thinner=Thinner(self.args.keep_source)
|
||||
source_node=ZfsNode(self.args.backup_name, self, ssh_config=self.args.ssh_config, ssh_to=self.args.ssh_source, readonly=self.args.test, debug_output=self.args.debug_output, description=description, thinner=source_thinner)
|
||||
source_node.verbose("Send all datasets that have 'autobackup:{}=true' or 'autobackup:{}=child'".format(self.args.backup_name, self.args.backup_name))
|
||||
|
||||
self.verbose("")
|
||||
# sync datasets, or thin-only on both sides
|
||||
# target is needed for this.
|
||||
def sync_datasets(self, source_node, source_datasets):
|
||||
|
||||
description="[Target]"
|
||||
|
||||
self.set_title("Target settings")
|
||||
|
||||
target_thinner=Thinner(self.args.keep_target)
|
||||
target_node=ZfsNode(self.args.backup_name, self, ssh_config=self.args.ssh_config, ssh_to=self.args.ssh_target, readonly=self.args.test, debug_output=self.args.debug_output, description=description, thinner=target_thinner)
|
||||
target_node.verbose("Receive datasets under: {}".format(self.args.target_path))
|
||||
|
||||
self.set_title("Selecting")
|
||||
selected_source_datasets=source_node.selected_datasets
|
||||
if not selected_source_datasets:
|
||||
self.error("No source filesystems selected, please do a 'zfs set autobackup:{0}=true' on the source datasets you want to backup.".format(self.args.backup_name))
|
||||
return(255)
|
||||
|
||||
source_datasets=[]
|
||||
|
||||
#filter out already replicated stuff?
|
||||
if not self.args.ignore_replicated:
|
||||
source_datasets=selected_source_datasets
|
||||
else:
|
||||
self.set_title("Filtering already replicated filesystems")
|
||||
for selected_source_dataset in selected_source_datasets:
|
||||
if selected_source_dataset.is_changed(self.args.min_change):
|
||||
source_datasets.append(selected_source_dataset)
|
||||
else:
|
||||
selected_source_dataset.verbose("Ignoring, already replicated")
|
||||
|
||||
|
||||
if not self.args.no_snapshot:
|
||||
self.set_title("Snapshotting")
|
||||
source_node.consistent_snapshot(source_datasets, source_node.new_snapshotname(), min_changed_bytes=self.args.min_change)
|
||||
|
||||
|
||||
|
||||
if self.args.no_send:
|
||||
self.set_title("Thinning")
|
||||
self.set_title("Thinning source and target")
|
||||
else:
|
||||
self.set_title("Sending and thinning")
|
||||
|
||||
#check if exists, to prevent vague errors
|
||||
target_dataset=ZfsDataset(target_node, self.args.target_path)
|
||||
if not target_dataset.exists:
|
||||
self.error("Target path '{}' does not exist. Please create this dataset first.".format(target_dataset))
|
||||
return(255)
|
||||
|
||||
|
||||
if self.args.filter_properties:
|
||||
filter_properties=self.args.filter_properties.split(",")
|
||||
else:
|
||||
@@ -1583,40 +1682,181 @@ class ZfsAutobackup:
|
||||
if self.args.clear_mountpoint:
|
||||
set_properties.append("canmount=noauto")
|
||||
|
||||
#sync datasets
|
||||
fail_count=0
|
||||
target_datasets=[]
|
||||
for source_dataset in source_datasets:
|
||||
|
||||
try:
|
||||
#determine corresponding target_dataset
|
||||
target_name=self.args.target_path + "/" + source_dataset.lstrip_path(self.args.strip_path)
|
||||
target_dataset=ZfsDataset(target_node, target_name)
|
||||
target_datasets.append(target_dataset)
|
||||
|
||||
#ensure parents exists
|
||||
if not self.args.no_send and not target_dataset.parent.exists:
|
||||
#TODO: this isn't perfect yet, in some cases it can create parents when it shouldn't.
|
||||
if not self.args.no_send and not target_dataset.parent in target_datasets and not target_dataset.parent.exists:
|
||||
target_dataset.parent.create_filesystem(parents=True)
|
||||
|
||||
source_dataset.sync_snapshots(target_dataset, show_progress=self.args.progress, resume=self.args.resume, filter_properties=filter_properties, set_properties=set_properties, ignore_recv_exit_code=self.args.ignore_transfer_errors, source_holds= not self.args.no_holds, rollback=self.args.rollback, raw=self.args.raw, other_snapshots=self.args.other_snapshots, no_send=self.args.no_send, destroy_incompatible=self.args.destroy_incompatible)
|
||||
#determine common zpool features
|
||||
source_features=source_node.get_zfs_pool(source_dataset.split_path()[0]).features
|
||||
target_features=target_node.get_zfs_pool(target_dataset.split_path()[0]).features
|
||||
common_features=source_features and target_features
|
||||
# source_dataset.debug("Common features: {}".format(common_features))
|
||||
|
||||
source_dataset.sync_snapshots(target_dataset, show_progress=self.args.progress, features=common_features, filter_properties=filter_properties, set_properties=set_properties, ignore_recv_exit_code=self.args.ignore_transfer_errors, source_holds= not self.args.no_holds, rollback=self.args.rollback, raw=self.args.raw, other_snapshots=self.args.other_snapshots, no_send=self.args.no_send, destroy_incompatible=self.args.destroy_incompatible)
|
||||
except Exception as e:
|
||||
fail_count=fail_count+1
|
||||
self.error("DATASET FAILED: "+str(e))
|
||||
source_dataset.error("FAILED: "+str(e))
|
||||
if self.args.debug:
|
||||
raise
|
||||
|
||||
self.thin_missing_targets(ZfsDataset(target_node, self.args.target_path), target_datasets)
|
||||
|
||||
|
||||
return(fail_count)
|
||||
|
||||
|
||||
def thin_missing_targets(self, target_dataset, used_target_datasets):
|
||||
"""thin/destroy target datasets that are missing on the source."""
|
||||
|
||||
self.debug("Thinning obsolete datasets")
|
||||
|
||||
for dataset in target_dataset.recursive_datasets:
|
||||
try:
|
||||
if dataset not in used_target_datasets:
|
||||
dataset.debug("Missing on source, thinning")
|
||||
dataset.thin()
|
||||
|
||||
#destroy_missing enabled?
|
||||
if self.args.destroy_missing!=None:
|
||||
|
||||
#can't do anything without our own snapshots
|
||||
if not dataset.our_snapshots:
|
||||
if dataset.datasets:
|
||||
dataset.debug("Destroy missing: ignoring")
|
||||
else:
|
||||
dataset.verbose("Destroy missing: has no snapshots made by us. (please destroy manually)")
|
||||
else:
|
||||
#past the deadline?
|
||||
deadline_ttl=ThinnerRule("0s"+self.args.destroy_missing).ttl
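#ThinnerRule is apparently (re)used here just to parse the human-readable period (e.g. "30d") into a ttl in seconds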
|
||||
now=int(time.time())
|
||||
if dataset.our_snapshots[-1].timestamp + deadline_ttl > now:
|
||||
dataset.verbose("Destroy missing: Waiting for deadline.")
|
||||
else:
|
||||
|
||||
dataset.debug("Destroy missing: Removing our snapshots.")
|
||||
|
||||
#remove all our snapshots, except the last, to save space in case we fail later on
|
||||
for snapshot in dataset.our_snapshots[:-1]:
|
||||
snapshot.destroy(fail_exception=True)
|
||||
|
||||
#does it have other snapshots?
|
||||
has_others=False
|
||||
for snapshot in dataset.snapshots:
|
||||
if not snapshot.is_ours():
|
||||
has_others=True
|
||||
break
|
||||
|
||||
if has_others:
|
||||
dataset.verbose("Destroy missing: Still in use by other snapshots")
|
||||
else:
|
||||
if dataset.datasets:
|
||||
dataset.verbose("Destroy missing: Still has children here.")
|
||||
else:
|
||||
dataset.verbose("Destroy missing.")
|
||||
dataset.our_snapshots[-1].destroy(fail_exception=True)
|
||||
dataset.destroy(fail_exception=True)
|
||||
|
||||
except Exception as e:
|
||||
dataset.error("Error during destoy missing ({})".format(str(e)))
|
||||
|
||||
|
||||
|
||||
|
||||
def thin_source(self, source_datasets):
|
||||
|
||||
self.set_title("Thinning source")
|
||||
|
||||
for source_dataset in source_datasets:
|
||||
source_dataset.thin(skip_holds=True)
|
||||
|
||||
|
||||
def run(self):
|
||||
|
||||
try:
|
||||
self.verbose (HEADER)
|
||||
|
||||
if self.args.test:
|
||||
self.verbose("TEST MODE - SIMULATING WITHOUT MAKING ANY CHANGES")
|
||||
|
||||
self.set_title("Source settings")
|
||||
|
||||
description="[Source]"
|
||||
source_thinner=Thinner(self.args.keep_source)
|
||||
source_node=ZfsNode(self.args.backup_name, self, ssh_config=self.args.ssh_config, ssh_to=self.args.ssh_source, readonly=self.args.test, debug_output=self.args.debug_output, description=description, thinner=source_thinner)
|
||||
source_node.verbose("Selects all datasets that have property 'autobackup:{}=true' (or childs of datasets that have 'autobackup:{}=child')".format(self.args.backup_name, self.args.backup_name))
|
||||
|
||||
self.set_title("Selecting")
|
||||
selected_source_datasets=source_node.selected_datasets
|
||||
if not selected_source_datasets:
|
||||
self.error("No source filesystems selected, please do a 'zfs set autobackup:{0}=true' on the source datasets you want to select.".format(self.args.backup_name))
|
||||
return(255)
|
||||
|
||||
source_datasets=[]
|
||||
|
||||
#filter out already replicated stuff?
|
||||
if not self.args.ignore_replicated:
|
||||
source_datasets=selected_source_datasets
|
||||
else:
|
||||
self.set_title("Filtering already replicated filesystems")
|
||||
for selected_source_dataset in selected_source_datasets:
|
||||
if selected_source_dataset.is_changed(self.args.min_change):
|
||||
source_datasets.append(selected_source_dataset)
|
||||
else:
|
||||
selected_source_dataset.verbose("Ignoring, already replicated")
|
||||
|
||||
if not self.args.no_snapshot:
|
||||
self.set_title("Snapshotting")
|
||||
source_node.consistent_snapshot(source_datasets, source_node.new_snapshotname(), min_changed_bytes=self.args.min_change)
|
||||
|
||||
#if target is specified, we sync the datasets, otherwise we just thin the source. (e.g. snapshot mode)
|
||||
if self.args.target_path:
|
||||
fail_count=self.sync_datasets(source_node, source_datasets)
|
||||
else:
|
||||
self.thin_source(source_datasets)
|
||||
fail_count=0
|
||||
|
||||
|
||||
if not fail_count:
|
||||
if self.args.test:
|
||||
self.set_title("All tests successfull.")
|
||||
else:
|
||||
self.set_title("All backups completed successfully")
|
||||
self.set_title("All operations completed successfully")
|
||||
if not self.args.target_path:
|
||||
self.verbose("(No target_path specified, only operated as snapshot tool.)")
|
||||
|
||||
else:
|
||||
self.error("{} datasets failed!".format(fail_count))
|
||||
if fail_count!=255:
|
||||
self.error("{} failures!".format(fail_count))
|
||||
|
||||
|
||||
if self.args.test:
|
||||
self.verbose("TEST MODE - DID NOT MAKE ANY BACKUPS!")
|
||||
self.verbose("")
|
||||
self.verbose("TEST MODE - DID NOT MAKE ANY CHANGES!")
|
||||
|
||||
return(fail_count)
|
||||
|
||||
except Exception as e:
|
||||
self.error("Exception: "+str(e))
|
||||
if self.args.debug:
|
||||
raise
|
||||
return(255)
|
||||
except KeyboardInterrupt as e:
|
||||
self.error("Aborted")
|
||||
return(255)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
zfs_autobackup=ZfsAutobackup()
|
||||
zfs_autobackup=ZfsAutobackup(sys.argv[1:])
|
||||
sys.exit(zfs_autobackup.run())
|
||||
|
||||
17
ngrok.sh
Executable file
@@ -0,0 +1,17 @@
|
||||
#!/bin/bash
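# CI debugging helper: exposes an interactive shell on the build host through an ngrok TCP tunnel
# (nc listens on port 8888 and pipes into a bash loop; ngrok forwards the port to the outside)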
|
||||
if ! [ -e ngrok ]; then
|
||||
wget -O ngrok.zip https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip
|
||||
unzip ngrok.zip
|
||||
fi
|
||||
{
|
||||
mkfifo pipe
|
||||
echo "Executing nc"
|
||||
nc -k -l -v 8888 <pipe | ( while true; do bash >pipe 2>&1; echo "restarting" ;sleep 1; done )
|
||||
killall -SIGINT ngrok && echo "ngrok terminated"
|
||||
} &
|
||||
{
|
||||
echo "Executing ngrok"
|
||||
./ngrok authtoken $NGROK_TOKEN
|
||||
./ngrok tcp 8888 --log=stdout
|
||||
} &
|
||||
wait
|
||||
6
requirements.txt
Normal file
@@ -0,0 +1,6 @@
|
||||
colorama
|
||||
argparse
|
||||
coverage==4.5.4
|
||||
python-coveralls
|
||||
unittest2
|
||||
mock
|
||||
31
run_tests
Executable file
@@ -0,0 +1,31 @@
|
||||
#!/bin/bash
|
||||
|
||||
|
||||
if [ "$USER" != "root" ]; then
|
||||
echo "Need root to do proper zfs testing"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
#reactivate python environment, if any (useful in Travis)
|
||||
[ "$VIRTUAL_ENV" ] && source $VIRTUAL_ENV/bin/activate
|
||||
|
||||
# test needs ssh access to localhost for testing
|
||||
if ! [ -e /root/.ssh/id_rsa ]; then
|
||||
ssh-keygen -t rsa -f /root/.ssh/id_rsa -P '' || exit 1
|
||||
cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys || exit 1
|
||||
ssh -oStrictHostKeyChecking=no localhost true || exit 1
|
||||
fi
|
||||
|
||||
coverage run --source bin.zfs_autobackup -m unittest discover -vv
|
||||
EXIT=$?
|
||||
|
||||
echo
|
||||
coverage report
|
||||
|
||||
#this does automatic Travis CI / https://coveralls.io/ integration:
|
||||
if which coveralls > /dev/null; then
|
||||
echo "Submitting to coveralls.io:"
|
||||
coveralls
|
||||
fi
|
||||
|
||||
exit $EXIT
|
||||
135
test_destroymissing.py
Normal file
@@ -0,0 +1,135 @@
|
||||
|
||||
from basetest import *
|
||||
|
||||
|
||||
class TestZfsNode(unittest2.TestCase):
|
||||
|
||||
def setUp(self):
|
||||
prepare_zpools()
|
||||
self.longMessage=True
|
||||
|
||||
|
||||
|
||||
def test_destroymissing(self):
|
||||
|
||||
#initial backup
|
||||
with patch('time.strftime', return_value="10101111000000"): #1000 years in past
|
||||
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-holds".split(" ")).run())
|
||||
|
||||
with patch('time.strftime', return_value="20101111000000"): #far in past
|
||||
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-holds --allow-empty".split(" ")).run())
|
||||
|
||||
|
||||
with self.subTest("Should do nothing yet"):
|
||||
with OutputIO() as buf:
|
||||
with redirect_stdout(buf):
|
||||
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-snapshot --destroy-missing 0s".split(" ")).run())
|
||||
|
||||
print(buf.getvalue())
|
||||
self.assertNotIn(": Destroy missing", buf.getvalue())
|
||||
|
||||
|
||||
with self.subTest("missing dataset of us that still has children"):
|
||||
|
||||
#just deselect it so it counts as 'missing'
|
||||
shelltest("zfs set autobackup:test=child test_source1/fs1")
|
||||
|
||||
with OutputIO() as buf:
|
||||
with redirect_stdout(buf), redirect_stderr(buf):
|
||||
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-snapshot --destroy-missing 0s".split(" ")).run())
|
||||
|
||||
print(buf.getvalue())
|
||||
#should have done the snapshot cleanup for destroy missing:
|
||||
self.assertIn("fs1@test-10101111000000: Destroying", buf.getvalue())
|
||||
|
||||
self.assertIn("fs1: Destroy missing: Still has children here.", buf.getvalue())
|
||||
|
||||
shelltest("zfs inherit autobackup:test test_source1/fs1")
|
||||
|
||||
|
||||
with self.subTest("Normal destroyed leaf"):
|
||||
shelltest("zfs destroy -r test_source1/fs1/sub")
|
||||
|
||||
#wait for deadline of last snapshot
|
||||
with OutputIO() as buf:
|
||||
with redirect_stdout(buf):
|
||||
#100y: latest should not be old enough, while the second-to-latest snapshot IS old enough:
|
||||
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-snapshot --destroy-missing 100y".split(" ")).run())
|
||||
|
||||
print(buf.getvalue())
|
||||
self.assertIn(": Waiting for deadline", buf.getvalue())
|
||||
|
||||
#past deadline, destroy
|
||||
with OutputIO() as buf:
|
||||
with redirect_stdout(buf):
|
||||
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-snapshot --destroy-missing 1y".split(" ")).run())
|
||||
|
||||
print(buf.getvalue())
|
||||
self.assertIn("sub: Destroying", buf.getvalue())
|
||||
|
||||
|
||||
with self.subTest("Leaf with other snapshot still using it"):
|
||||
shelltest("zfs destroy -r test_source1/fs1")
|
||||
shelltest("zfs snapshot -r test_target1/test_source1/fs1@other1")
|
||||
|
||||
|
||||
with OutputIO() as buf:
|
||||
with redirect_stdout(buf):
|
||||
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-snapshot --destroy-missing 0s".split(" ")).run())
|
||||
|
||||
print(buf.getvalue())
|
||||
|
||||
#cant finish because still in use:
|
||||
self.assertIn("fs1: Destroy missing: Still in use", buf.getvalue())
|
||||
|
||||
shelltest("zfs destroy test_target1/test_source1/fs1@other1")
|
||||
|
||||
|
||||
with self.subTest("In use by clone"):
|
||||
shelltest("zfs clone test_target1/test_source1/fs1@test-20101111000000 test_target1/clone1")
|
||||
|
||||
with OutputIO() as buf:
|
||||
with redirect_stdout(buf), redirect_stderr(buf):
|
||||
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-snapshot --destroy-missing 0s".split(" ")).run())
|
||||
|
||||
print(buf.getvalue())
|
||||
#now tries to destroy our own last snapshot (before the final destroy of the dataset)
|
||||
self.assertIn("fs1@test-20101111000000: Destroying", buf.getvalue())
|
||||
#but cant finish because still in use:
|
||||
self.assertIn("fs1: Error during destoy missing", buf.getvalue())
|
||||
|
||||
shelltest("zfs destroy test_target1/clone1")
|
||||
|
||||
|
||||
with self.subTest("Should leave test_source1 parent"):
|
||||
|
||||
with OutputIO() as buf:
|
||||
with redirect_stdout(buf), redirect_stderr(buf):
|
||||
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-snapshot --destroy-missing 0s".split(" ")).run())
|
||||
|
||||
print(buf.getvalue())
|
||||
#should have done the snapshot cleanup for destroy missing:
|
||||
self.assertIn("fs1: Destroying", buf.getvalue())
|
||||
|
||||
with OutputIO() as buf:
|
||||
with redirect_stdout(buf), redirect_stderr(buf):
|
||||
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-snapshot --destroy-missing 0s".split(" ")).run())
|
||||
|
||||
print(buf.getvalue())
|
||||
#on second run it sees the dangling ex-parent but doesn't know what to do with it (since it has no snapshot of its own)
|
||||
self.assertIn("test_source1: Destroy missing: has no snapshots made by us.", buf.getvalue())
|
||||
|
||||
|
||||
|
||||
|
||||
#end result
|
||||
r=shelltest("zfs list -H -o name -r -t all test_target1")
|
||||
self.assertMultiLineEqual(r,"""
|
||||
test_target1
|
||||
test_target1/test_source1
|
||||
test_target1/test_source2
|
||||
test_target1/test_source2/fs2
|
||||
test_target1/test_source2/fs2/sub
|
||||
test_target1/test_source2/fs2/sub@test-10101111000000
|
||||
test_target1/test_source2/fs2/sub@test-20101111000000
|
||||
""")
|
||||
135
test_executenode.py
Normal file
@@ -0,0 +1,135 @@
|
||||
from basetest import *
|
||||
|
||||
|
||||
print("THIS TEST REQUIRES SSH TO LOCALHOST")
|
||||
|
||||
class TestExecuteNode(unittest2.TestCase):
|
||||
|
||||
# def setUp(self):
|
||||
|
||||
# return super().setUp()
|
||||
|
||||
def basics(self, node ):
|
||||
|
||||
with self.subTest("simple echo"):
|
||||
self.assertEqual(node.run(["echo","test"]), ["test"])
|
||||
|
||||
with self.subTest("error exit code"):
|
||||
with self.assertRaises(subprocess.CalledProcessError):
|
||||
node.run(["false"])
|
||||
|
||||
#
|
||||
with self.subTest("multiline without tabsplit"):
|
||||
self.assertEqual(node.run(["echo","l1c1\tl1c2\nl2c1\tl2c2"], tab_split=False), ["l1c1\tl1c2", "l2c1\tl2c2"])
|
||||
|
||||
#multiline tabsplit
|
||||
with self.subTest("multiline tabsplit"):
|
||||
self.assertEqual(node.run(["echo","l1c1\tl1c2\nl2c1\tl2c2"], tab_split=True), [['l1c1', 'l1c2'], ['l2c1', 'l2c2']])
|
||||
|
||||
#escaping test (shouldn't be a problem locally, single quotes can be a problem remotely via ssh)
|
||||
with self.subTest("escape test"):
|
||||
s="><`'\"@&$()$bla\\/.*!#test _+-={}[]|"
|
||||
self.assertEqual(node.run(["echo",s]), [s])
|
||||
|
||||
#return std err as well, trigger stderr by listing something non existing
|
||||
with self.subTest("stderr return"):
|
||||
(stdout, stderr)=node.run(["ls", "nonexistingfile"], return_stderr=True, valid_exitcodes=[2])
|
||||
self.assertEqual(stdout,[])
|
||||
self.assertRegex(stderr[0],"nonexistingfile")
|
||||
|
||||
#slow command, make sure things dont exit too early
|
||||
with self.subTest("early exit test"):
|
||||
start_time=time.time()
|
||||
self.assertEqual(node.run(["sleep","1"]), [])
|
||||
self.assertGreaterEqual(time.time()-start_time,1)
|
||||
|
||||
#input a string and check it via cat
|
||||
with self.subTest("stdin input string"):
|
||||
self.assertEqual(node.run(["cat"], input="test"), ["test"])
|
||||
|
||||
#command that wants input, while we dont have input, shouldnt hang forever.
|
||||
with self.subTest("stdin process with input=None (shouldn't hang)"):
|
||||
self.assertEqual(node.run(["cat"]), [])
|
||||
|
||||
def test_basics_local(self):
|
||||
node=ExecuteNode(debug_output=True)
|
||||
self.basics(node)
|
||||
|
||||
def test_basics_remote(self):
|
||||
node=ExecuteNode(ssh_to="localhost", debug_output=True)
|
||||
self.basics(node)
|
||||
|
||||
################
|
||||
|
||||
def test_readonly(self):
|
||||
node=ExecuteNode(debug_output=True, readonly=True)
|
||||
|
||||
self.assertEqual(node.run(["echo","test"], readonly=False), None)
|
||||
self.assertEqual(node.run(["echo","test"], readonly=True), ["test"])
|
||||
|
||||
|
||||
################
|
||||
|
||||
def pipe(self, nodea, nodeb):
|
||||
|
||||
with self.subTest("pipe data"):
|
||||
output=nodea.run(["dd", "if=/dev/zero", "count=1000"], pipe=True)
|
||||
self.assertEqual(nodeb.run(["md5sum"], input=output), ["816df6f64deba63b029ca19d880ee10a -"])
|
||||
|
||||
with self.subTest("exit code both ends of pipe ok"):
|
||||
output=nodea.run(["true"], pipe=True)
|
||||
nodeb.run(["true"], input=output)
|
||||
|
||||
with self.subTest("error on pipe input side"):
|
||||
with self.assertRaises(subprocess.CalledProcessError):
|
||||
output=nodea.run(["false"], pipe=True)
|
||||
nodeb.run(["true"], input=output)
|
||||
|
||||
with self.subTest("error on pipe output side "):
|
||||
with self.assertRaises(subprocess.CalledProcessError):
|
||||
output=nodea.run(["true"], pipe=True)
|
||||
nodeb.run(["false"], input=output)
|
||||
|
||||
with self.subTest("error on both sides of pipe"):
|
||||
with self.assertRaises(subprocess.CalledProcessError):
|
||||
output=nodea.run(["false"], pipe=True)
|
||||
nodeb.run(["false"], input=output)
|
||||
|
||||
with self.subTest("check stderr on pipe output side"):
|
||||
output=nodea.run(["true"], pipe=True)
|
||||
(stdout, stderr)=nodeb.run(["ls", "nonexistingfile"], input=output, return_stderr=True, valid_exitcodes=[0,2])
|
||||
self.assertEqual(stdout,[])
|
||||
self.assertRegex(stderr[0], "nonexistingfile" )
|
||||
|
||||
with self.subTest("check stderr on pipe input side (should be only printed)"):
|
||||
output=nodea.run(["ls", "nonexistingfile"], pipe=True)
|
||||
(stdout, stderr)=nodeb.run(["true"], input=output, return_stderr=True, valid_exitcodes=[0,2])
|
||||
self.assertEqual(stdout,[])
|
||||
self.assertEqual(stderr,[] )
|
||||
|
||||
|
||||
|
||||
|
||||
def test_pipe_local_local(self):
|
||||
nodea=ExecuteNode(debug_output=True)
|
||||
nodeb=ExecuteNode(debug_output=True)
|
||||
self.pipe(nodea, nodeb)
|
||||
|
||||
def test_pipe_remote_remote(self):
|
||||
nodea=ExecuteNode(ssh_to="localhost", debug_output=True)
|
||||
nodeb=ExecuteNode(ssh_to="localhost", debug_output=True)
|
||||
self.pipe(nodea, nodeb)
|
||||
|
||||
def test_pipe_local_remote(self):
|
||||
nodea=ExecuteNode(debug_output=True)
|
||||
nodeb=ExecuteNode(ssh_to="localhost", debug_output=True)
|
||||
self.pipe(nodea, nodeb)
|
||||
|
||||
def test_pipe_remote_local(self):
|
||||
nodea=ExecuteNode(ssh_to="localhost", debug_output=True)
|
||||
nodeb=ExecuteNode(debug_output=True)
|
||||
self.pipe(nodea, nodeb)
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
unittest.main()
|
||||
275
test_externalfailures.py
Normal file
@@ -0,0 +1,275 @@
|
||||
|
||||
from basetest import *
|
||||
|
||||
|
||||
class TestZfsNode(unittest2.TestCase):
|
||||
|
||||
def setUp(self):
|
||||
prepare_zpools()
|
||||
self.longMessage=True
|
||||
|
||||
# generate a resumable state
|
||||
#NOTE: this generates two resumable test_target1/test_source1/fs1 and test_target1/test_source1/fs1/sub
|
||||
def generate_resume(self):
|
||||
|
||||
r=shelltest("zfs set compress=off test_source1 test_target1")
|
||||
|
||||
#big change on source
|
||||
r=shelltest("dd if=/dev/zero of=/test_source1/fs1/data bs=250M count=1")
|
||||
|
||||
#waste space on target
|
||||
r=shelltest("dd if=/dev/zero of=/test_target1/waste bs=250M count=1")
|
||||
|
||||
#should fail and leave resume token (if supported)
|
||||
self.assertTrue(ZfsAutobackup("test test_target1 --verbose".split(" ")).run())
|
||||
|
||||
#free up space
|
||||
r=shelltest("rm /test_target1/waste")
|
||||
#sync
|
||||
r=shelltest("zfs umount test_target1")
|
||||
r=shelltest("zfs mount test_target1")
|
||||
|
||||
|
||||
#resume initial backup
|
||||
def test_initial_resume(self):
|
||||
|
||||
#initial backup, leaves resume token
|
||||
with patch('time.strftime', return_value="20101111000000"):
|
||||
self.generate_resume()
|
||||
|
||||
#--test should resume and succeed
|
||||
with OutputIO() as buf:
|
||||
with redirect_stdout(buf):
|
||||
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --test".split(" ")).run())
|
||||
|
||||
print(buf.getvalue())
|
||||
|
||||
#did we really resume?
|
||||
if "0.6.5" in ZFS_USERSPACE:
|
||||
#abort this late, for better coverage
|
||||
self.skipTest("Resume not supported in this ZFS userspace version")
|
||||
else:
|
||||
self.assertIn(": resuming", buf.getvalue())
|
||||
|
||||
|
||||
#should resume and succeed
|
||||
with OutputIO() as buf:
|
||||
with redirect_stdout(buf):
|
||||
self.assertFalse(ZfsAutobackup("test test_target1 --verbose".split(" ")).run())
|
||||
|
||||
print(buf.getvalue())
|
||||
|
||||
#did we really resume?
|
||||
if "0.6.5" in ZFS_USERSPACE:
|
||||
#abort this late, for better coverage
|
||||
self.skipTest("Resume not supported in this ZFS userspace version")
|
||||
else:
|
||||
self.assertIn(": resuming", buf.getvalue())
|
||||
|
||||
r=shelltest("zfs list -H -o name -r -t all test_target1")
|
||||
self.assertMultiLineEqual(r,"""
|
||||
test_target1
|
||||
test_target1/test_source1
|
||||
test_target1/test_source1/fs1
|
||||
test_target1/test_source1/fs1@test-20101111000000
|
||||
test_target1/test_source1/fs1/sub
|
||||
test_target1/test_source1/fs1/sub@test-20101111000000
|
||||
test_target1/test_source2
|
||||
test_target1/test_source2/fs2
|
||||
test_target1/test_source2/fs2/sub
|
||||
test_target1/test_source2/fs2/sub@test-20101111000000
|
||||
""")
|
||||
|
||||
|
||||
#resume incremental backup
|
||||
def test_incremental_resume(self):
|
||||
|
||||
#initial backup
|
||||
with patch('time.strftime', return_value="20101111000000"):
|
||||
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty".split(" ")).run())
|
||||
|
||||
#incremental backup leaves resume token
|
||||
with patch('time.strftime', return_value="20101111000001"):
|
||||
self.generate_resume()
|
||||
|
||||
#--test should resume and succeed
|
||||
with OutputIO() as buf:
|
||||
with redirect_stdout(buf):
|
||||
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --test".split(" ")).run())
|
||||
|
||||
print(buf.getvalue())
|
||||
|
||||
#did we really resume?
|
||||
if "0.6.5" in ZFS_USERSPACE:
|
||||
#abort this late, for better coverage
|
||||
self.skipTest("Resume not supported in this ZFS userspace version")
|
||||
else:
|
||||
self.assertIn(": resuming", buf.getvalue())
|
||||
|
||||
#should resume and succeed
|
||||
with OutputIO() as buf:
|
||||
with redirect_stdout(buf):
|
||||
self.assertFalse(ZfsAutobackup("test test_target1 --verbose".split(" ")).run())
|
||||
|
||||
print(buf.getvalue())
|
||||
|
||||
#did we really resume?
|
||||
if "0.6.5" in ZFS_USERSPACE:
|
||||
#abort this late, for better coverage
|
||||
self.skipTest("Resume not supported in this ZFS userspace version")
|
||||
else:
|
||||
self.assertIn(": resuming", buf.getvalue())
|
||||
|
||||
r=shelltest("zfs list -H -o name -r -t all test_target1")
|
||||
self.assertMultiLineEqual(r,"""
|
||||
test_target1
|
||||
test_target1/test_source1
|
||||
test_target1/test_source1/fs1
|
||||
test_target1/test_source1/fs1@test-20101111000000
|
||||
test_target1/test_source1/fs1@test-20101111000001
|
||||
test_target1/test_source1/fs1/sub
|
||||
test_target1/test_source1/fs1/sub@test-20101111000000
|
||||
test_target1/test_source2
|
||||
test_target1/test_source2/fs2
|
||||
test_target1/test_source2/fs2/sub
|
||||
test_target1/test_source2/fs2/sub@test-20101111000000
|
||||
""")
|
||||
|
||||
|
||||
# generate an invalid resume token, and verify that it's aborted automatically
|
||||
def test_initial_resumeabort(self):
|
||||
|
||||
if "0.6.5" in ZFS_USERSPACE:
|
||||
self.skipTest("Resume not supported in this ZFS userspace version")
|
||||
|
||||
#initial backup, leaves resume token
|
||||
with patch('time.strftime', return_value="20101111000000"):
|
||||
self.generate_resume()
|
||||
|
||||
#remove corresponding source snapshot, so it becomes invalid
|
||||
shelltest("zfs destroy test_source1/fs1@test-20101111000000")
|
||||
|
||||
#NOTE: it can only abort the initial dataset if it has no subs
|
||||
shelltest("zfs destroy test_target1/test_source1/fs1/sub; true")
|
||||
|
||||
#--test try again, should abort old resume
|
||||
with patch('time.strftime', return_value="20101111000001"):
|
||||
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --test".split(" ")).run())
|
||||
|
||||
#try again, should abort old resume
|
||||
with patch('time.strftime', return_value="20101111000001"):
|
||||
self.assertFalse(ZfsAutobackup("test test_target1 --verbose".split(" ")).run())
|
||||
|
||||
r=shelltest("zfs list -H -o name -r -t all test_target1")
|
||||
self.assertMultiLineEqual(r,"""
|
||||
test_target1
|
||||
test_target1/test_source1
|
||||
test_target1/test_source1/fs1
|
||||
test_target1/test_source1/fs1@test-20101111000001
|
||||
test_target1/test_source1/fs1/sub
|
||||
test_target1/test_source1/fs1/sub@test-20101111000000
|
||||
test_target1/test_source2
|
||||
test_target1/test_source2/fs2
|
||||
test_target1/test_source2/fs2/sub
|
||||
test_target1/test_source2/fs2/sub@test-20101111000000
|
||||
""")
|
||||
|
||||
|
||||
# generate an invalid resume token, and verify that it's aborted automatically
|
||||
def test_incremental_resumeabort(self):
|
||||
|
||||
if "0.6.5" in ZFS_USERSPACE:
|
||||
self.skipTest("Resume not supported in this ZFS userspace version")
|
||||
|
||||
#initial backup
|
||||
with patch('time.strftime', return_value="20101111000000"):
|
||||
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty".split(" ")).run())
|
||||
|
||||
#incremental backup, leaves resume token
|
||||
with patch('time.strftime', return_value="20101111000001"):
|
||||
self.generate_resume()
|
||||
|
||||
#remove corresponding source snapshot, so it becomes invalid
|
||||
shelltest("zfs destroy test_source1/fs1@test-20101111000001")
|
||||
|
||||
#--test try again, should abort old resume
|
||||
with patch('time.strftime', return_value="20101111000002"):
|
||||
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --test".split(" ")).run())
|
||||
|
||||
#try again, should abort old resume
|
||||
with patch('time.strftime', return_value="20101111000002"):
|
||||
self.assertFalse(ZfsAutobackup("test test_target1 --verbose".split(" ")).run())
|
||||
|
||||
r=shelltest("zfs list -H -o name -r -t all test_target1")
|
||||
self.assertMultiLineEqual(r,"""
|
||||
test_target1
|
||||
test_target1/test_source1
|
||||
test_target1/test_source1/fs1
|
||||
test_target1/test_source1/fs1@test-20101111000000
|
||||
test_target1/test_source1/fs1@test-20101111000002
|
||||
test_target1/test_source1/fs1/sub
|
||||
test_target1/test_source1/fs1/sub@test-20101111000000
|
||||
test_target1/test_source2
|
||||
test_target1/test_source2/fs2
|
||||
test_target1/test_source2/fs2/sub
|
||||
test_target1/test_source2/fs2/sub@test-20101111000000
|
||||
""")
|
||||
|
||||
|
||||
#create a resume situation where the other side doesn't want the snapshot anymore (should abort resume)
|
||||
def test_abort_unwanted_resume(self):
|
||||
|
||||
if "0.6.5" in ZFS_USERSPACE:
|
||||
self.skipTest("Resume not supported in this ZFS userspace version")
|
||||
|
||||
with patch('time.strftime', return_value="20101111000000"):
|
||||
self.assertFalse(ZfsAutobackup("test test_target1 --verbose".split(" ")).run())
|
||||
|
||||
#generate resume
|
||||
with patch('time.strftime', return_value="20101111000001"):
|
||||
self.generate_resume()
|
||||
|
||||
with OutputIO() as buf:
|
||||
with redirect_stdout(buf):
|
||||
#incremental, doesn't want the previous one anymore
|
||||
with patch('time.strftime', return_value="20101111000002"):
|
||||
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --keep-target=0 --debug --allow-empty".split(" ")).run())
|
||||
|
||||
print(buf.getvalue())
|
||||
|
||||
self.assertIn(": aborting resume, since", buf.getvalue())
|
||||
|
||||
r=shelltest("zfs list -H -o name -r -t all test_target1")
|
||||
self.assertMultiLineEqual(r,"""
|
||||
test_target1
|
||||
test_target1/test_source1
|
||||
test_target1/test_source1/fs1
|
||||
test_target1/test_source1/fs1@test-20101111000002
|
||||
test_target1/test_source1/fs1/sub
|
||||
test_target1/test_source1/fs1/sub@test-20101111000002
|
||||
test_target1/test_source2
|
||||
test_target1/test_source2/fs2
|
||||
test_target1/test_source2/fs2/sub
|
||||
test_target1/test_source2/fs2/sub@test-20101111000002
|
||||
""")
|
||||
|
||||
|
||||
def test_missing_common(self):
|
||||
|
||||
with patch('time.strftime', return_value="20101111000000"):
|
||||
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty".split(" ")).run())
|
||||
|
||||
#remove common snapshot and leave nothing
|
||||
shelltest("zfs release zfs_autobackup:test test_source1/fs1@test-20101111000000")
|
||||
shelltest("zfs destroy test_source1/fs1@test-20101111000000")
|
||||
|
||||
with patch('time.strftime', return_value="20101111000001"):
|
||||
self.assertTrue(ZfsAutobackup("test test_target1 --verbose --allow-empty".split(" ")).run())
|
||||
|
||||
|
||||
|
||||
|
||||
############# TODO:
|
||||
def test_ignoretransfererrors(self):
|
||||
|
||||
self.skipTest("todo: create some kind of situation where zfs recv exits with an error but transfer is still ok (happens in practice with acltype)")
|
||||
20
test_regressions.py
Normal file
@@ -0,0 +1,20 @@
|
||||
|
||||
from basetest import *
|
||||
|
||||
|
||||
class TestZfsNode(unittest2.TestCase):
|
||||
|
||||
def setUp(self):
|
||||
prepare_zpools()
|
||||
self.longMessage=True
|
||||
|
||||
# #resume initial backup
|
||||
# def test_keepsource0(self):
|
||||
|
||||
# #somehow only specifying --allow-empty --keep-source 0 failed:
|
||||
# with patch('time.strftime', return_value="20101111000000"):
|
||||
# self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty --keep-source 0".split(" ")).run())
|
||||
|
||||
# with patch('time.strftime', return_value="20101111000001"):
|
||||
# self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty --keep-source 0".split(" ")).run())
|
||||
|
||||
139
test_thinner.py
Normal file
@@ -0,0 +1,139 @@
|
||||
from basetest import *
|
||||
|
||||
#randint is different in python 2 vs 3
|
||||
randint_compat = lambda lo, hi: lo + int(random.random() * (hi + 1 - lo))
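#behaves like random.randint(lo, hi) (inclusive bounds), but keeps the seeded sequence identical on python 2 and 3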
|
||||
|
||||
|
||||
class Thing:
|
||||
def __init__(self, timestamp):
|
||||
self.timestamp=timestamp
|
||||
|
||||
def __str__(self):
|
||||
# age=now-self.timestamp
|
||||
struct=time.gmtime(self.timestamp)
|
||||
return("{}".format(time.strftime("%Y-%m-%d %H:%M:%S",struct)))
|
||||
|
||||
|
||||
class TestThinner(unittest2.TestCase):
|
||||
|
||||
# def setUp(self):
|
||||
|
||||
# return super().setUp()
|
||||
|
||||
def test_incremental(self):
|
||||
ok=['2023-01-03 10:53:16',
|
||||
'2024-01-02 15:43:29',
|
||||
'2025-01-01 06:15:32',
|
||||
'2026-01-01 02:48:23',
|
||||
'2026-04-07 20:07:36',
|
||||
'2026-05-07 02:30:29',
|
||||
'2026-06-06 01:19:46',
|
||||
'2026-07-06 06:38:09',
|
||||
'2026-08-05 05:08:53',
|
||||
'2026-09-04 03:33:04',
|
||||
'2026-10-04 05:27:09',
|
||||
'2026-11-04 04:01:17',
|
||||
'2026-12-03 13:49:56',
|
||||
'2027-01-01 17:02:00',
|
||||
'2027-01-03 04:26:42',
|
||||
'2027-02-01 14:16:02',
|
||||
'2027-02-12 03:31:02',
|
||||
'2027-02-18 00:33:10',
|
||||
'2027-02-26 21:09:54',
|
||||
'2027-03-02 08:05:18',
|
||||
'2027-03-03 16:46:09',
|
||||
'2027-03-04 06:39:14',
|
||||
'2027-03-06 03:35:41',
|
||||
'2027-03-08 12:24:42',
|
||||
'2027-03-08 20:34:57']
|
||||
|
||||
|
||||
|
||||
|
||||
#some arbitrary date
|
||||
now=1589229252
|
||||
#we want deterministic results
|
||||
random.seed(1337)
|
||||
thinner=Thinner("5,10s1min,1d1w,1w1m,1m12m,1y5y")
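#schedule above, presumably: always keep 5, one per 10s for a minute, one per day for a week,
#one per week for a month, one per month for 12 months, one per year for 5 years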
|
||||
things=[]
|
||||
|
||||
#thin incrementally while adding
|
||||
for i in range(0,5000):
|
||||
|
||||
#increase random amount of time and maybe add a thing
|
||||
now=now+randint_compat(0,3600*24)
|
||||
if random.random()>=0.5:
|
||||
things.append(Thing(now))
|
||||
|
||||
(keeps, removes)=thinner.thin(things, now=now)
|
||||
things=keeps
|
||||
|
||||
|
||||
result=[]
|
||||
for thing in things:
|
||||
result.append(str(thing))
|
||||
|
||||
print("Thinner result incremental:")
|
||||
pprint.pprint(result)
|
||||
|
||||
self.assertEqual(result, ok)
|
||||
|
||||
|
||||
def test_full(self):
|
||||
ok=['2022-03-09 01:56:23',
|
||||
'2023-01-03 10:53:16',
|
||||
'2024-01-02 15:43:29',
|
||||
'2025-01-01 06:15:32',
|
||||
'2026-01-01 02:48:23',
|
||||
'2026-03-14 09:08:04',
|
||||
'2026-04-07 20:07:36',
|
||||
'2026-05-07 02:30:29',
|
||||
'2026-06-06 01:19:46',
|
||||
'2026-07-06 06:38:09',
|
||||
'2026-08-05 05:08:53',
|
||||
'2026-09-04 03:33:04',
|
||||
'2026-10-04 05:27:09',
|
||||
'2026-11-04 04:01:17',
|
||||
'2026-12-03 13:49:56',
|
||||
'2027-01-01 17:02:00',
|
||||
'2027-01-03 04:26:42',
|
||||
'2027-02-01 14:16:02',
|
||||
'2027-02-08 02:41:14',
|
||||
'2027-02-12 03:31:02',
|
||||
'2027-02-18 00:33:10',
|
||||
'2027-02-26 21:09:54',
|
||||
'2027-03-02 08:05:18',
|
||||
'2027-03-03 16:46:09',
|
||||
'2027-03-04 06:39:14',
|
||||
'2027-03-06 03:35:41',
|
||||
'2027-03-08 12:24:42',
|
||||
'2027-03-08 20:34:57']
|
||||
|
||||
#some arbitrary date
|
||||
now=1589229252
|
||||
#we want deterministic results
|
||||
random.seed(1337)
|
||||
thinner=Thinner("5,10s1min,1d1w,1w1m,1m12m,1y5y")
|
||||
things=[]
|
||||
|
||||
for i in range(0,5000):
|
||||
|
||||
#increase random amount of time and maybe add a thing
|
||||
now=now+randint_compat(0,3600*24)
|
||||
if random.random()>=0.5:
|
||||
things.append(Thing(now))
|
||||
|
||||
(things, removes)=thinner.thin(things, now=now)
|
||||
|
||||
result=[]
|
||||
for thing in things:
|
||||
result.append(str(thing))
|
||||
|
||||
print("Thinner result full:")
|
||||
pprint.pprint(result)
|
||||
|
||||
self.assertEqual(result, ok)
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
unittest.main()
|
||||
847
test_zfsautobackup.py
Normal file
@@ -0,0 +1,847 @@
|
||||
from basetest import *
|
||||
import time
|
||||
|
||||
|
||||
|
||||
class TestZfsAutobackup(unittest2.TestCase):
|
||||
|
||||
def setUp(self):
|
||||
prepare_zpools()
|
||||
self.longMessage=True
|
||||
|
||||
def test_invalidpars(self):
|
||||
|
||||
self.assertEqual(ZfsAutobackup("test test_target1 --keep-source -1".split(" ")).run(), 255)
|
||||
|
||||
def test_snapshotmode(self):
|
||||
"""test snapshot tool mode"""
|
||||
|
||||
with patch('time.strftime', return_value="20101111000000"):
|
||||
self.assertFalse(ZfsAutobackup("test test_target1 --verbose".split(" ")).run())
|
||||
|
||||
with patch('time.strftime', return_value="20101111000001"):
|
||||
self.assertFalse(ZfsAutobackup("test test_target1 --allow-empty --verbose".split(" ")).run())
|
||||
|
||||
with patch('time.strftime', return_value="20101111000002"):
|
||||
self.assertFalse(ZfsAutobackup("test --verbose --allow-empty --keep-source 0".split(" ")).run())
|
||||
|
||||
#on source: only has 1 and 2 (1 was held)
|
||||
#on target: has 0 and 1
|
||||
#XXX:
|
||||
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
|
||||
self.assertMultiLineEqual(r,"""
|
||||
test_source1
|
||||
test_source1/fs1
|
||||
test_source1/fs1@test-20101111000001
|
||||
test_source1/fs1@test-20101111000002
|
||||
test_source1/fs1/sub
|
||||
test_source1/fs1/sub@test-20101111000001
|
||||
test_source1/fs1/sub@test-20101111000002
|
||||
test_source2
|
||||
test_source2/fs2
|
||||
test_source2/fs2/sub
|
||||
test_source2/fs2/sub@test-20101111000001
|
||||
test_source2/fs2/sub@test-20101111000002
|
||||
test_source2/fs3
|
||||
test_source2/fs3/sub
|
||||
test_target1
|
||||
test_target1/test_source1
|
||||
test_target1/test_source1/fs1
|
||||
test_target1/test_source1/fs1@test-20101111000000
|
||||
test_target1/test_source1/fs1@test-20101111000001
|
||||
test_target1/test_source1/fs1/sub
|
||||
test_target1/test_source1/fs1/sub@test-20101111000000
|
||||
test_target1/test_source1/fs1/sub@test-20101111000001
|
||||
test_target1/test_source2
|
||||
test_target1/test_source2/fs2
|
||||
test_target1/test_source2/fs2/sub
|
||||
test_target1/test_source2/fs2/sub@test-20101111000000
|
||||
test_target1/test_source2/fs2/sub@test-20101111000001
|
||||
""")
|
||||
|
||||
|
||||
|
||||
def test_defaults(self):
|
||||
|
||||
with self.subTest("no datasets selected"):
|
||||
with OutputIO() as buf:
|
||||
with redirect_stderr(buf):
|
||||
with patch('time.strftime', return_value="20101111000000"):
|
||||
self.assertTrue(ZfsAutobackup("nonexisting test_target1 --verbose --debug".split(" ")).run())
|
||||
|
||||
print(buf.getvalue())
|
||||
#correct message?
|
||||
self.assertIn("No source filesystems selected", buf.getvalue())
|
||||
|
||||
|
||||
with self.subTest("defaults with full verbose and debug"):
|
||||
|
||||
with patch('time.strftime', return_value="20101111000000"):
|
||||
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --debug".split(" ")).run())
|
||||
|
||||
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
|
||||
self.assertMultiLineEqual(r,"""
|
||||
test_source1
|
||||
test_source1/fs1
|
||||
test_source1/fs1@test-20101111000000
|
||||
test_source1/fs1/sub
|
||||
test_source1/fs1/sub@test-20101111000000
|
||||
test_source2
|
||||
test_source2/fs2
|
||||
test_source2/fs2/sub
|
||||
test_source2/fs2/sub@test-20101111000000
|
||||
test_source2/fs3
|
||||
test_source2/fs3/sub
|
||||
test_target1
|
||||
test_target1/test_source1
|
||||
test_target1/test_source1/fs1
|
||||
test_target1/test_source1/fs1@test-20101111000000
|
||||
test_target1/test_source1/fs1/sub
|
||||
test_target1/test_source1/fs1/sub@test-20101111000000
|
||||
test_target1/test_source2
|
||||
test_target1/test_source2/fs2
|
||||
test_target1/test_source2/fs2/sub
|
||||
test_target1/test_source2/fs2/sub@test-20101111000000
|
||||
""")
|
||||
|
||||
with self.subTest("bare defaults, allow empty"):
|
||||
with patch('time.strftime', return_value="20101111000001"):
|
||||
self.assertFalse(ZfsAutobackup("test test_target1 --allow-empty".split(" ")).run())
|
||||
|
||||
|
||||
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
|
||||
self.assertMultiLineEqual(r,"""
|
||||
test_source1
|
||||
test_source1/fs1
|
||||
test_source1/fs1@test-20101111000000
|
||||
test_source1/fs1@test-20101111000001
|
||||
test_source1/fs1/sub
|
||||
test_source1/fs1/sub@test-20101111000000
|
||||
test_source1/fs1/sub@test-20101111000001
|
||||
test_source2
|
||||
test_source2/fs2
|
||||
test_source2/fs2/sub
|
||||
test_source2/fs2/sub@test-20101111000000
|
||||
test_source2/fs2/sub@test-20101111000001
|
||||
test_source2/fs3
|
||||
test_source2/fs3/sub
|
||||
test_target1
|
||||
test_target1/test_source1
|
||||
test_target1/test_source1/fs1
|
||||
test_target1/test_source1/fs1@test-20101111000000
|
||||
test_target1/test_source1/fs1@test-20101111000001
|
||||
test_target1/test_source1/fs1/sub
|
||||
test_target1/test_source1/fs1/sub@test-20101111000000
|
||||
test_target1/test_source1/fs1/sub@test-20101111000001
|
||||
test_target1/test_source2
|
||||
test_target1/test_source2/fs2
|
||||
test_target1/test_source2/fs2/sub
|
||||
test_target1/test_source2/fs2/sub@test-20101111000000
|
||||
test_target1/test_source2/fs2/sub@test-20101111000001
|
||||
""")
|
||||
|
||||
with self.subTest("verify holds"):
|
||||
|
||||
r=shelltest("zfs get -r userrefs test_source1 test_source2 test_target1")
|
||||
self.assertMultiLineEqual(r,"""
|
||||
NAME PROPERTY VALUE SOURCE
|
||||
test_source1 userrefs - -
|
||||
test_source1/fs1 userrefs - -
|
||||
test_source1/fs1@test-20101111000000 userrefs 0 -
|
||||
test_source1/fs1@test-20101111000001 userrefs 1 -
|
||||
test_source1/fs1/sub userrefs - -
|
||||
test_source1/fs1/sub@test-20101111000000 userrefs 0 -
|
||||
test_source1/fs1/sub@test-20101111000001 userrefs 1 -
|
||||
test_source2 userrefs - -
|
||||
test_source2/fs2 userrefs - -
|
||||
test_source2/fs2/sub userrefs - -
|
||||
test_source2/fs2/sub@test-20101111000000 userrefs 0 -
|
||||
test_source2/fs2/sub@test-20101111000001 userrefs 1 -
|
||||
test_source2/fs3 userrefs - -
|
||||
test_source2/fs3/sub userrefs - -
|
||||
test_target1 userrefs - -
|
||||
test_target1/test_source1 userrefs - -
|
||||
test_target1/test_source1/fs1 userrefs - -
|
||||
test_target1/test_source1/fs1@test-20101111000000 userrefs 0 -
|
||||
test_target1/test_source1/fs1@test-20101111000001 userrefs 1 -
|
||||
test_target1/test_source1/fs1/sub userrefs - -
|
||||
test_target1/test_source1/fs1/sub@test-20101111000000 userrefs 0 -
|
||||
test_target1/test_source1/fs1/sub@test-20101111000001 userrefs 1 -
|
||||
test_target1/test_source2 userrefs - -
|
||||
test_target1/test_source2/fs2 userrefs - -
|
||||
test_target1/test_source2/fs2/sub userrefs - -
|
||||
test_target1/test_source2/fs2/sub@test-20101111000000 userrefs 0 -
|
||||
test_target1/test_source2/fs2/sub@test-20101111000001 userrefs 1 -
|
||||
""")
|
||||
|
||||
#make sure time handling is correct. try to make snapshots a year apart and verify that only snapshots roughly 1y old are kept
|
||||
with self.subTest("test time checking"):
|
||||
with patch('time.strftime', return_value="20111111000000"):
|
||||
self.assertFalse(ZfsAutobackup("test test_target1 --allow-empty --verbose".split(" ")).run())
|
||||
|
||||
|
||||
time_str="20111112000000" #month in the "future"
|
||||
future_timestamp=time_secs=time.mktime(time.strptime(time_str,"%Y%m%d%H%M%S"))
|
||||
with patch('time.time', return_value=future_timestamp):
|
||||
with patch('time.strftime', return_value="20111111000001"):
|
||||
self.assertFalse(ZfsAutobackup("test test_target1 --allow-empty --verbose --keep-source 1y1y --keep-target 1d1y".split(" ")).run())
|
||||
|
||||
|
||||
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
|
||||
self.assertMultiLineEqual(r,"""
|
||||
test_source1
|
||||
test_source1/fs1
|
||||
test_source1/fs1@test-20111111000000
|
||||
test_source1/fs1@test-20111111000001
|
||||
test_source1/fs1/sub
|
||||
test_source1/fs1/sub@test-20111111000000
|
||||
test_source1/fs1/sub@test-20111111000001
|
||||
test_source2
|
||||
test_source2/fs2
|
||||
test_source2/fs2/sub
|
||||
test_source2/fs2/sub@test-20111111000000
|
||||
test_source2/fs2/sub@test-20111111000001
|
||||
test_source2/fs3
|
||||
test_source2/fs3/sub
|
||||
test_target1
|
||||
test_target1/test_source1
|
||||
test_target1/test_source1/fs1
|
||||
test_target1/test_source1/fs1@test-20111111000000
|
||||
test_target1/test_source1/fs1@test-20111111000001
|
||||
test_target1/test_source1/fs1/sub
|
||||
test_target1/test_source1/fs1/sub@test-20111111000000
|
||||
test_target1/test_source1/fs1/sub@test-20111111000001
|
||||
test_target1/test_source2
|
||||
test_target1/test_source2/fs2
|
||||
test_target1/test_source2/fs2/sub
|
||||
test_target1/test_source2/fs2/sub@test-20111111000000
|
||||
test_target1/test_source2/fs2/sub@test-20111111000001
|
||||
""")
|
||||
|
||||
|
||||
    def test_ignore_othersnaphots(self):

        r=shelltest("zfs snapshot test_source1/fs1@othersimple")
        r=shelltest("zfs snapshot test_source1/fs1@otherdate-20001111000000")

        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose".split(" ")).run())

        r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
        self.assertMultiLineEqual(r,"""
test_source1
test_source1/fs1
test_source1/fs1@othersimple
test_source1/fs1@otherdate-20001111000000
test_source1/fs1@test-20101111000000
test_source1/fs1/sub
test_source1/fs1/sub@test-20101111000000
test_source2
test_source2/fs2
test_source2/fs2/sub
test_source2/fs2/sub@test-20101111000000
test_source2/fs3
test_source2/fs3/sub
test_target1
test_target1/test_source1
test_target1/test_source1/fs1
test_target1/test_source1/fs1@test-20101111000000
test_target1/test_source1/fs1/sub
test_target1/test_source1/fs1/sub@test-20101111000000
test_target1/test_source2
test_target1/test_source2/fs2
test_target1/test_source2/fs2/sub
test_target1/test_source2/fs2/sub@test-20101111000000
""")

    def test_othersnaphots(self):

        r=shelltest("zfs snapshot test_source1/fs1@othersimple")
        r=shelltest("zfs snapshot test_source1/fs1@otherdate-20001111000000")

        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --other-snapshots".split(" ")).run())

        r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
        self.assertMultiLineEqual(r,"""
test_source1
test_source1/fs1
test_source1/fs1@othersimple
test_source1/fs1@otherdate-20001111000000
test_source1/fs1@test-20101111000000
test_source1/fs1/sub
test_source1/fs1/sub@test-20101111000000
test_source2
test_source2/fs2
test_source2/fs2/sub
test_source2/fs2/sub@test-20101111000000
test_source2/fs3
test_source2/fs3/sub
test_target1
test_target1/test_source1
test_target1/test_source1/fs1
test_target1/test_source1/fs1@othersimple
test_target1/test_source1/fs1@otherdate-20001111000000
test_target1/test_source1/fs1@test-20101111000000
test_target1/test_source1/fs1/sub
test_target1/test_source1/fs1/sub@test-20101111000000
test_target1/test_source2
test_target1/test_source2/fs2
test_target1/test_source2/fs2/sub
test_target1/test_source2/fs2/sub@test-20101111000000
""")

    def test_nosnapshot(self):

        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-snapshot".split(" ")).run())

        r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
        #(only parents are created)
        #TODO: it probably shouldn't create these
        self.assertMultiLineEqual(r,"""
test_source1
test_source1/fs1
test_source1/fs1/sub
test_source2
test_source2/fs2
test_source2/fs2/sub
test_source2/fs3
test_source2/fs3/sub
test_target1
test_target1/test_source1
test_target1/test_source2
test_target1/test_source2/fs2
""")

    def test_nosend(self):

        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-send".split(" ")).run())

        r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
        #(only parents are created)
        #TODO: it probably shouldn't create these
        self.assertMultiLineEqual(r,"""
test_source1
test_source1/fs1
test_source1/fs1@test-20101111000000
test_source1/fs1/sub
test_source1/fs1/sub@test-20101111000000
test_source2
test_source2/fs2
test_source2/fs2/sub
test_source2/fs2/sub@test-20101111000000
test_source2/fs3
test_source2/fs3/sub
test_target1
""")

    def test_ignorereplicated(self):
        r=shelltest("zfs snapshot test_source1/fs1@otherreplication")

        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --ignore-replicated".split(" ")).run())

        r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
        #(only parents are created)
        #TODO: it probably shouldn't create these
        self.assertMultiLineEqual(r,"""
test_source1
test_source1/fs1
test_source1/fs1@otherreplication
test_source1/fs1/sub
test_source1/fs1/sub@test-20101111000000
test_source2
test_source2/fs2
test_source2/fs2/sub
test_source2/fs2/sub@test-20101111000000
test_source2/fs3
test_source2/fs3/sub
test_target1
test_target1/test_source1
test_target1/test_source1/fs1
test_target1/test_source1/fs1/sub
test_target1/test_source1/fs1/sub@test-20101111000000
test_target1/test_source2
test_target1/test_source2/fs2
test_target1/test_source2/fs2/sub
test_target1/test_source2/fs2/sub@test-20101111000000
""")

    def test_noholds(self):

        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-holds".split(" ")).run())

        r=shelltest("zfs get -r userrefs test_source1 test_source2 test_target1")
        self.assertMultiLineEqual(r,"""
NAME PROPERTY VALUE SOURCE
test_source1 userrefs - -
test_source1/fs1 userrefs - -
test_source1/fs1@test-20101111000000 userrefs 0 -
test_source1/fs1/sub userrefs - -
test_source1/fs1/sub@test-20101111000000 userrefs 0 -
test_source2 userrefs - -
test_source2/fs2 userrefs - -
test_source2/fs2/sub userrefs - -
test_source2/fs2/sub@test-20101111000000 userrefs 0 -
test_source2/fs3 userrefs - -
test_source2/fs3/sub userrefs - -
test_target1 userrefs - -
test_target1/test_source1 userrefs - -
test_target1/test_source1/fs1 userrefs - -
test_target1/test_source1/fs1@test-20101111000000 userrefs 1 -
test_target1/test_source1/fs1/sub userrefs - -
test_target1/test_source1/fs1/sub@test-20101111000000 userrefs 1 -
test_target1/test_source2 userrefs - -
test_target1/test_source2/fs2 userrefs - -
test_target1/test_source2/fs2/sub userrefs - -
test_target1/test_source2/fs2/sub@test-20101111000000 userrefs 1 -
""")

    def test_strippath(self):

        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --strip-path=1".split(" ")).run())

        r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
        self.assertMultiLineEqual(r,"""
test_source1
test_source1/fs1
test_source1/fs1@test-20101111000000
test_source1/fs1/sub
test_source1/fs1/sub@test-20101111000000
test_source2
test_source2/fs2
test_source2/fs2/sub
test_source2/fs2/sub@test-20101111000000
test_source2/fs3
test_source2/fs3/sub
test_target1
test_target1/fs1
test_target1/fs1@test-20101111000000
test_target1/fs1/sub
test_target1/fs1/sub@test-20101111000000
test_target1/fs2
test_target1/fs2/sub
test_target1/fs2/sub@test-20101111000000
""")

    def test_clearrefres(self):

        #on zfs-utils 0.6.x -x isn't supported
        r=shelltest("zfs recv -x bla test >/dev/null </dev/zero; echo $?")
        if r=="\n2\n":
            self.skipTest("This zfs-userspace version doesn't support -x")

        r=shelltest("zfs set refreservation=1M test_source1/fs1")

        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --clear-refreservation".split(" ")).run())

        r=shelltest("zfs get refreservation -r test_source1 test_source2 test_target1")
        self.assertMultiLineEqual(r,"""
NAME PROPERTY VALUE SOURCE
test_source1 refreservation none default
test_source1/fs1 refreservation 1M local
test_source1/fs1@test-20101111000000 refreservation - -
test_source1/fs1/sub refreservation none default
test_source1/fs1/sub@test-20101111000000 refreservation - -
test_source2 refreservation none default
test_source2/fs2 refreservation none default
test_source2/fs2/sub refreservation none default
test_source2/fs2/sub@test-20101111000000 refreservation - -
test_source2/fs3 refreservation none default
test_source2/fs3/sub refreservation none default
test_target1 refreservation none default
test_target1/test_source1 refreservation none default
test_target1/test_source1/fs1 refreservation none default
test_target1/test_source1/fs1@test-20101111000000 refreservation - -
test_target1/test_source1/fs1/sub refreservation none default
test_target1/test_source1/fs1/sub@test-20101111000000 refreservation - -
test_target1/test_source2 refreservation none default
test_target1/test_source2/fs2 refreservation none default
test_target1/test_source2/fs2/sub refreservation none default
test_target1/test_source2/fs2/sub@test-20101111000000 refreservation - -
""")

    def test_clearmount(self):

        #on zfs-utils 0.6.x -o isn't supported
        r=shelltest("zfs recv -o bla=1 test >/dev/null </dev/zero; echo $?")
        if r=="\n2\n":
            self.skipTest("This zfs-userspace version doesn't support -o")

        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --clear-mountpoint".split(" ")).run())

        r=shelltest("zfs get canmount -r test_source1 test_source2 test_target1")
        self.assertMultiLineEqual(r,"""
NAME PROPERTY VALUE SOURCE
test_source1 canmount on default
test_source1/fs1 canmount on default
test_source1/fs1@test-20101111000000 canmount - -
test_source1/fs1/sub canmount on default
test_source1/fs1/sub@test-20101111000000 canmount - -
test_source2 canmount on default
test_source2/fs2 canmount on default
test_source2/fs2/sub canmount on default
test_source2/fs2/sub@test-20101111000000 canmount - -
test_source2/fs3 canmount on default
test_source2/fs3/sub canmount on default
test_target1 canmount on default
test_target1/test_source1 canmount on default
test_target1/test_source1/fs1 canmount noauto local
test_target1/test_source1/fs1@test-20101111000000 canmount - -
test_target1/test_source1/fs1/sub canmount noauto local
test_target1/test_source1/fs1/sub@test-20101111000000 canmount - -
test_target1/test_source2 canmount on default
test_target1/test_source2/fs2 canmount on default
test_target1/test_source2/fs2/sub canmount noauto local
test_target1/test_source2/fs2/sub@test-20101111000000 canmount - -
""")

    def test_rollback(self):

        #initial backup
        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose".split(" ")).run())

        #make change
        r=shelltest("zfs mount test_target1/test_source1/fs1")
        r=shelltest("touch /test_target1/test_source1/fs1/change.txt")

        with patch('time.strftime', return_value="20101111000001"):
            #should fail (busy)
            self.assertTrue(ZfsAutobackup("test test_target1 --verbose --allow-empty".split(" ")).run())

        with patch('time.strftime', return_value="20101111000002"):
            #rollback, should succeed
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty --rollback".split(" ")).run())

    def test_destroyincompat(self):

        #initial backup
        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose".split(" ")).run())

        #add multiple compatible snapshots (written is still 0)
        r=shelltest("zfs snapshot test_target1/test_source1/fs1@compatible1")
        r=shelltest("zfs snapshot test_target1/test_source1/fs1@compatible2")

        with patch('time.strftime', return_value="20101111000001"):
            #should be ok, is compatible
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty".split(" ")).run())

        #add incompatible snapshot by changing and snapshotting
        r=shelltest("zfs mount test_target1/test_source1/fs1")
        r=shelltest("touch /test_target1/test_source1/fs1/change.txt")
        r=shelltest("zfs snapshot test_target1/test_source1/fs1@incompatible1")

        with patch('time.strftime', return_value="20101111000002"):
            #--test should fail, now incompatible
            self.assertTrue(ZfsAutobackup("test test_target1 --verbose --allow-empty --test".split(" ")).run())

        with patch('time.strftime', return_value="20101111000002"):
            #should fail, now incompatible
            self.assertTrue(ZfsAutobackup("test test_target1 --verbose --allow-empty".split(" ")).run())

        with patch('time.strftime', return_value="20101111000003"):
            #--test should succeed by destroying incompatibles
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty --destroy-incompatible --test".split(" ")).run())

        with patch('time.strftime', return_value="20101111000003"):
            #should succeed by destroying incompatibles
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty --destroy-incompatible".split(" ")).run())

    def test_keepsourcetarget(self):

        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty".split(" ")).run())

        with patch('time.strftime', return_value="20101111000001"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty".split(" ")).run())

        #should still have all snapshots
        r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
        self.assertMultiLineEqual(r,"""
test_source1
test_source1/fs1
test_source1/fs1@test-20101111000000
test_source1/fs1@test-20101111000001
test_source1/fs1/sub
test_source1/fs1/sub@test-20101111000000
test_source1/fs1/sub@test-20101111000001
test_source2
test_source2/fs2
test_source2/fs2/sub
test_source2/fs2/sub@test-20101111000000
test_source2/fs2/sub@test-20101111000001
test_source2/fs3
test_source2/fs3/sub
test_target1
test_target1/test_source1
test_target1/test_source1/fs1
test_target1/test_source1/fs1@test-20101111000000
test_target1/test_source1/fs1@test-20101111000001
test_target1/test_source1/fs1/sub
test_target1/test_source1/fs1/sub@test-20101111000000
test_target1/test_source1/fs1/sub@test-20101111000001
test_target1/test_source2
test_target1/test_source2/fs2
test_target1/test_source2/fs2/sub
test_target1/test_source2/fs2/sub@test-20101111000000
test_target1/test_source2/fs2/sub@test-20101111000001
""")

        #run again with keep=0
        with patch('time.strftime', return_value="20101111000002"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty --keep-source=0 --keep-target=0".split(" ")).run())

        #should only have last snapshots
        r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
        self.assertMultiLineEqual(r,"""
test_source1
test_source1/fs1
test_source1/fs1@test-20101111000002
test_source1/fs1/sub
test_source1/fs1/sub@test-20101111000002
test_source2
test_source2/fs2
test_source2/fs2/sub
test_source2/fs2/sub@test-20101111000002
test_source2/fs3
test_source2/fs3/sub
test_target1
test_target1/test_source1
test_target1/test_source1/fs1
test_target1/test_source1/fs1@test-20101111000002
test_target1/test_source1/fs1/sub
test_target1/test_source1/fs1/sub@test-20101111000002
test_target1/test_source2
test_target1/test_source2/fs2
test_target1/test_source2/fs2/sub
test_target1/test_source2/fs2/sub@test-20101111000002
""")

    def test_ssh(self):

        #test all ssh directions

        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty --ssh-source localhost".split(" ")).run())

        with patch('time.strftime', return_value="20101111000001"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty --ssh-target localhost".split(" ")).run())

        with patch('time.strftime', return_value="20101111000002"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty --ssh-source localhost --ssh-target localhost".split(" ")).run())

        r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
        self.assertMultiLineEqual(r,"""
test_source1
test_source1/fs1
test_source1/fs1@test-20101111000000
test_source1/fs1@test-20101111000001
test_source1/fs1@test-20101111000002
test_source1/fs1/sub
test_source1/fs1/sub@test-20101111000000
test_source1/fs1/sub@test-20101111000001
test_source1/fs1/sub@test-20101111000002
test_source2
test_source2/fs2
test_source2/fs2/sub
test_source2/fs2/sub@test-20101111000000
test_source2/fs2/sub@test-20101111000001
test_source2/fs2/sub@test-20101111000002
test_source2/fs3
test_source2/fs3/sub
test_target1
test_target1/test_source1
test_target1/test_source1/fs1
test_target1/test_source1/fs1@test-20101111000000
test_target1/test_source1/fs1@test-20101111000001
test_target1/test_source1/fs1@test-20101111000002
test_target1/test_source1/fs1/sub
test_target1/test_source1/fs1/sub@test-20101111000000
test_target1/test_source1/fs1/sub@test-20101111000001
test_target1/test_source1/fs1/sub@test-20101111000002
test_target1/test_source2
test_target1/test_source2/fs2
test_target1/test_source2/fs2/sub
test_target1/test_source2/fs2/sub@test-20101111000000
test_target1/test_source2/fs2/sub@test-20101111000001
test_target1/test_source2/fs2/sub@test-20101111000002
""")

    def test_minchange(self):

        #initial
        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --min-change 100000".split(" ")).run())

        #make small change, use umount to reflect the changes immediately
        r=shelltest("zfs set compress=off test_source1")
        r=shelltest("touch /test_source1/fs1/change.txt")
        r=shelltest("zfs umount test_source1/fs1; zfs mount test_source1/fs1")

        #too small change, takes no snapshots
        with patch('time.strftime', return_value="20101111000001"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --min-change 100000".split(" ")).run())

        #make big change
        r=shelltest("dd if=/dev/zero of=/test_source1/fs1/change.txt bs=200000 count=1")
        r=shelltest("zfs umount test_source1/fs1; zfs mount test_source1/fs1")

        #bigger change, should take snapshot
        with patch('time.strftime', return_value="20101111000002"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --min-change 100000".split(" ")).run())

        r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
        self.assertMultiLineEqual(r,"""
test_source1
test_source1/fs1
test_source1/fs1@test-20101111000000
test_source1/fs1@test-20101111000002
test_source1/fs1/sub
test_source1/fs1/sub@test-20101111000000
test_source2
test_source2/fs2
test_source2/fs2/sub
test_source2/fs2/sub@test-20101111000000
test_source2/fs3
test_source2/fs3/sub
test_target1
test_target1/test_source1
test_target1/test_source1/fs1
test_target1/test_source1/fs1@test-20101111000000
test_target1/test_source1/fs1@test-20101111000002
test_target1/test_source1/fs1/sub
test_target1/test_source1/fs1/sub@test-20101111000000
test_target1/test_source2
test_target1/test_source2/fs2
test_target1/test_source2/fs2/sub
test_target1/test_source2/fs2/sub@test-20101111000000
""")

    def test_test(self):

        #initial
        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --test".split(" ")).run())

        r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
        self.assertMultiLineEqual(r,"""
test_source1
test_source1/fs1
test_source1/fs1/sub
test_source2
test_source2/fs2
test_source2/fs2/sub
test_source2/fs3
test_source2/fs3/sub
test_target1
""")

        #actually make the initial backup
        with patch('time.strftime', return_value="20101111000001"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose".split(" ")).run())

        #test incremental
        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose --test".split(" ")).run())

        r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
        self.assertMultiLineEqual(r,"""
test_source1
test_source1/fs1
test_source1/fs1@test-20101111000001
test_source1/fs1/sub
test_source1/fs1/sub@test-20101111000001
test_source2
test_source2/fs2
test_source2/fs2/sub
test_source2/fs2/sub@test-20101111000001
test_source2/fs3
test_source2/fs3/sub
test_target1
test_target1/test_source1
test_target1/test_source1/fs1
test_target1/test_source1/fs1@test-20101111000001
test_target1/test_source1/fs1/sub
test_target1/test_source1/fs1/sub@test-20101111000001
test_target1/test_source2
test_target1/test_source2/fs2
test_target1/test_source2/fs2/sub
test_target1/test_source2/fs2/sub@test-20101111000001
""")

    def test_migrate(self):
        """test migration from other snapshotting systems. zfs-autobackup should be able to continue from any common snapshot, not just its own."""

        shelltest("zfs snapshot test_source1/fs1@migrate1")
        shelltest("zfs create test_target1/test_source1")
        shelltest("zfs send test_source1/fs1@migrate1| zfs recv test_target1/test_source1/fs1")

        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose".split(" ")).run())

        r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
        self.assertMultiLineEqual(r,"""
test_source1
test_source1/fs1
test_source1/fs1@migrate1
test_source1/fs1@test-20101111000000
test_source1/fs1/sub
test_source1/fs1/sub@test-20101111000000
test_source2
test_source2/fs2
test_source2/fs2/sub
test_source2/fs2/sub@test-20101111000000
test_source2/fs3
test_source2/fs3/sub
test_target1
test_target1/test_source1
test_target1/test_source1/fs1
test_target1/test_source1/fs1@migrate1
test_target1/test_source1/fs1@test-20101111000000
test_target1/test_source1/fs1/sub
test_target1/test_source1/fs1/sub@test-20101111000000
test_target1/test_source2
test_target1/test_source2/fs2
test_target1/test_source2/fs2/sub
test_target1/test_source2/fs2/sub@test-20101111000000
""")

    ###########################
    # TODO:

    def test_raw(self):

        self.skipTest("todo: later when travis supports zfs 0.8")

123
test_zfsnode.py
Normal file
123
test_zfsnode.py
Normal file
@ -0,0 +1,123 @@
from basetest import *


class TestZfsNode(unittest2.TestCase):

    def setUp(self):
        prepare_zpools()
        # return super().setUp()


    def test_consistent_snapshot(self):
        logger=Logger()
        description="[Source]"
        node=ZfsNode("test", logger, description=description)

        with self.subTest("first snapshot"):
            node.consistent_snapshot(node.selected_datasets, "test-1",100000)
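            # (consistent_snapshot snapshots all selected datasets together; judging from the subtests below,
            # the last argument is the minimum number of changed bytes required before snapshotting, 0 = always)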
            r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
            self.assertEqual(r,"""
test_source1
test_source1/fs1
test_source1/fs1@test-1
test_source1/fs1/sub
test_source1/fs1/sub@test-1
test_source2
test_source2/fs2
test_source2/fs2/sub
test_source2/fs2/sub@test-1
test_source2/fs3
test_source2/fs3/sub
test_target1
""")

        with self.subTest("second snapshot, no changes, no snapshot"):
            node.consistent_snapshot(node.selected_datasets, "test-2",1)
            r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
            self.assertEqual(r,"""
test_source1
test_source1/fs1
test_source1/fs1@test-1
test_source1/fs1/sub
test_source1/fs1/sub@test-1
test_source2
test_source2/fs2
test_source2/fs2/sub
test_source2/fs2/sub@test-1
test_source2/fs3
test_source2/fs3/sub
test_target1
""")

        with self.subTest("second snapshot, no changes, empty snapshot"):
            node.consistent_snapshot(node.selected_datasets, "test-2",0)
            r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
            self.assertEqual(r,"""
test_source1
test_source1/fs1
test_source1/fs1@test-1
test_source1/fs1@test-2
test_source1/fs1/sub
test_source1/fs1/sub@test-1
test_source1/fs1/sub@test-2
test_source2
test_source2/fs2
test_source2/fs2/sub
test_source2/fs2/sub@test-1
test_source2/fs2/sub@test-2
test_source2/fs3
test_source2/fs3/sub
test_target1
""")

    def test_getselected(self):
        logger=Logger()
        description="[Source]"
        node=ZfsNode("test", logger, description=description)
        s=pformat(node.selected_datasets)
        print(s)

        #basics
        self.assertEqual (s, """[(local): test_source1/fs1,
 (local): test_source1/fs1/sub,
 (local): test_source2/fs2/sub]""")

        #caching, so expect same result after changing it
        subprocess.check_call("zfs set autobackup:test=true test_source2/fs3", shell=True)
        self.assertEqual (s, """[(local): test_source1/fs1,
 (local): test_source1/fs1/sub,
 (local): test_source2/fs2/sub]""")

    def test_validcommand(self):
        logger=Logger()
        description="[Source]"
        node=ZfsNode("test", logger, description=description)

        with self.subTest("test invalid option"):
            self.assertFalse(node.valid_command(["zfs", "send", "--invalid-option", "nonexisting"]))
        with self.subTest("test valid option"):
            self.assertTrue(node.valid_command(["zfs", "send", "-v", "nonexisting"]))

    def test_supportedsendoptions(self):
        logger=Logger()
        description="[Source]"
        node=ZfsNode("test", logger, description=description)
        # -D probably always supported
        self.assertGreater(len(node.supported_send_options),0)

    def test_supportedrecvoptions(self):
        logger=Logger()
        description="[Source]"
        #NOTE: this could hang via ssh if we don't close filehandles properly (which was a previous bug)
        node=ZfsNode("test", logger, description=description, ssh_to='localhost')
        self.assertIsInstance(node.supported_recv_options, list)


if __name__ == '__main__':
    unittest.main()