Compare commits
10 Commits
| Author | SHA1 | Date |
|---|---|---|
| | fd7015b77a | |
| | f524845dbb | |
| | 51c15ec618 | |
| | 9fe13a4207 | |
| | 7b8b536d53 | |
| | 122035dfef | |
| | 7b278be0b9 | |
| | cc1a9a3d72 | |
| | eaad31e8b4 | |
| | 470b4aaf55 | |
README.md (185 changes)
@@ -25,7 +25,9 @@

## Introduction

-This is a tool I wrote to make replicating ZFS datasets easy and reliable. You can either use it as a backup tool or as a replication tool.
+This is a tool I wrote to make replicating ZFS datasets easy and reliable.
+
+You can either use it as a **backup** tool, **replication** tool or **snapshot** tool.

You can select what to backup by setting a custom `ZFS property`. This allows you to set and forget: configure it to back up your entire pool, and you never have to worry about backups again. Even new datasets you create later will be backed up.
@@ -35,13 +37,13 @@ Since it's using ZFS commands, you can see what it's actually doing by specifying

An important feature that's missing from other tools is a reliable `--test` option: this allows you to see what zfs-autobackup will do and tune your parameters. It will do everything, except make changes to your zfs datasets.

-Another nice thing is progress reporting with `--progress`. Its very useful with HUGE datasets, when you want to know how many hours/days it will take.
+Another nice thing is progress reporting: it's very useful with HUGE datasets, when you want to know how many hours/days it will take.

zfs-autobackup tries to be the easiest-to-use backup tool for zfs.

## Features

-* Works across operating systems: Tested with Linux, FreeBSD/FreeNAS and SmartOS.
+* Works across operating systems: Tested with **Linux**, **FreeBSD/FreeNAS** and **SmartOS**.
* Works in combination with existing replication systems. (Like Proxmox HA)
* Automatically selects filesystems to backup by looking at a simple ZFS property. (recursive)
* Creates consistent snapshots. (takes all snapshots at once, atomic.)
@@ -52,13 +54,16 @@ zfs-autobackup tries to be the easiest to use backup tool for zfs.

* Or even pull data from a server while pushing the backup to another server.
* Can be scheduled via a simple cronjob or run directly from commandline.
* Supports resuming of interrupted transfers. (via the zfs extensible_dataset feature)
-* Backups and snapshots can be named to prevent conflicts. (multiple backups from and to the same filesystems are no problem)
+* Backups and snapshots can be named to prevent conflicts. (multiple backups from and to the same datasets are no problem)
* Always creates a new snapshot before starting.
* Checks everything but tries to continue on non-fatal errors when possible. (Reports error-count when done)
-* Ability to 'finish' aborted backups to see what goes wrong.
+* Ability to manually 'finish' failed backups to see what's going on.
* Easy to debug and has a test-mode. Actual unix commands are printed.
-* Keeps latest X snapshots remote and locally. (default 30, configurable)
+* Uses **progressive thinning** for older snapshots.
* Uses zfs-holds on important snapshots so they can't be accidentally destroyed.
+* Automatic resuming of failed transfers.
+* Can continue from existing common snapshots. (e.g. easy migration)
+* Gracefully handles destroyed datasets on source.
* Easy installation:
* Just install zfs-autobackup via pip, or download it manually.
* Written in python and uses zfs-commands; no 3rd-party dependencies or libraries.
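As a quick sketch of the pip route (assuming the package is published on PyPI under the same name, as the feature list above suggests):

```console
root@backup:~# pip install --upgrade zfs-autobackup
```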
@@ -94,7 +99,7 @@ It should work with python 2.7 and higher.

## Example

-In this example we're going to backup a machine called `pve` to a machine called `backup`.
+In this example we're going to back up a machine called `server1` to a machine called `backup`.

### Setup SSH login
@@ -102,7 +107,7 @@ zfs-autobackup needs passwordless login via ssh. This means generating an ssh key.

#### Generate SSH key on `backup`

-On the server that runs zfs-autobackup you need to create an SSH key. You only need to do this once.
+On the backup-server that runs zfs-autobackup you need to create an SSH key. You only need to do this once.

Use the `ssh-keygen` command and leave the passphrase empty:
@@ -131,14 +136,14 @@ The key's randomart image is:

root@backup:~#
```

-#### Copy SSH key to `pve`
+#### Copy SSH key to `server1`

-Now you need to copy the public part of the key to `pve`
+Now you need to copy the public part of the key to `server1`.

The `ssh-copy-id` command is a handy tool to automate this. It will just ask for your password.

```console
-root@backup:~# ssh-copy-id root@pve.server.com
+root@backup:~# ssh-copy-id root@server1.server.com
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
@@ -146,11 +151,12 @@ Password:

Number of key(s) added: 1

-Now try logging into the machine, with: "ssh 'root@pve.server.com'"
+Now try logging into the machine, with: "ssh 'root@server1.server.com'"
and check to make sure that only the key(s) you wanted were added.

root@backup:~#
```

+This allows the backup-server to log in to `server1` as root without a password.
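To verify the passwordless login works, run any harmless command over ssh (`zfs list` here is just an illustrative choice):

```console
root@backup:~# ssh root@server1.server.com "zfs list"
```

If this prints the dataset list without asking for a password, the key setup is complete.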
### Select filesystems to backup
@@ -159,12 +165,12 @@ It's important to choose a unique and consistent backup name. In this case we name it offsite1.

On the source zfs system set the ```autobackup:offsite1``` zfs property to true:

```console
-[root@pve ~]# zfs set autobackup:offsite1=true rpool
+[root@server1 ~]# zfs set autobackup:offsite1=true rpool
-[root@pve ~]# zfs get -t filesystem,volume autobackup:offsite1
+[root@server1 ~]# zfs get -t filesystem,volume autobackup:offsite1
NAME                      PROPERTY             VALUE  SOURCE
rpool                     autobackup:offsite1  true   local
rpool/ROOT                autobackup:offsite1  true   inherited from rpool
-rpool/ROOT/pve-1          autobackup:offsite1  true   inherited from rpool
+rpool/ROOT/server1-1      autobackup:offsite1  true   inherited from rpool
rpool/data                autobackup:offsite1  true   inherited from rpool
rpool/data/vm-100-disk-0  autobackup:offsite1  true   inherited from rpool
rpool/swap                autobackup:offsite1  true   inherited from rpool
@@ -174,12 +180,12 @@ rpool/swap autobackup:offsite1 true

Because we don't want to back up everything, we can exclude certain filesystems by setting the property to false:

```console
-[root@pve ~]# zfs set autobackup:offsite1=false rpool/swap
+[root@server1 ~]# zfs set autobackup:offsite1=false rpool/swap
-[root@pve ~]# zfs get -t filesystem,volume autobackup:offsite1
+[root@server1 ~]# zfs get -t filesystem,volume autobackup:offsite1
NAME                      PROPERTY             VALUE  SOURCE
rpool                     autobackup:offsite1  true   local
rpool/ROOT                autobackup:offsite1  true   inherited from rpool
-rpool/ROOT/pve-1          autobackup:offsite1  true   inherited from rpool
+rpool/ROOT/server1-1      autobackup:offsite1  true   inherited from rpool
rpool/data                autobackup:offsite1  true   inherited from rpool
rpool/data/vm-100-disk-0  autobackup:offsite1  true   inherited from rpool
rpool/swap                autobackup:offsite1  false  local
@@ -191,10 +197,10 @@ rpool/swap autobackup:offsite1 false

Run the script on the backup server and pull the data from the server specified by --ssh-source.

```console
-[root@backup ~]# zfs-autobackup --ssh-source pve.server.com offsite1 backup/pve --progress --verbose
+[root@backup ~]# zfs-autobackup --ssh-source server1.server.com offsite1 backup/server1 --progress --verbose

#### Settings summary
-[Source] Datasets on: pve.server.com
+[Source] Datasets on: server1.server.com
[Source] Keep the last 10 snapshots.
[Source] Keep every 1 day, delete after 1 week.
[Source] Keep every 1 week, delete after 1 month.
@@ -206,12 +212,12 @@ Run the script on the backup server and pull the data from the server specified by --ssh-source.

[Target] Keep every 1 day, delete after 1 week.
[Target] Keep every 1 week, delete after 1 month.
[Target] Keep every 1 month, delete after 1 year.
-[Target] Receive datasets under: backup/pve
+[Target] Receive datasets under: backup/server1

#### Selecting
[Source] rpool: Selected (direct selection)
[Source] rpool/ROOT: Selected (inherited selection)
-[Source] rpool/ROOT/pve-1: Selected (inherited selection)
+[Source] rpool/ROOT/server1-1: Selected (inherited selection)
[Source] rpool/data: Selected (inherited selection)
[Source] rpool/data/vm-100-disk-0: Selected (inherited selection)
[Source] rpool/swap: Ignored (disabled)
|
|||||||
[Source] Creating snapshot offsite1-20200218180123
|
[Source] Creating snapshot offsite1-20200218180123
|
||||||
|
|
||||||
#### Sending and thinning
|
#### Sending and thinning
|
||||||
[Target] backup/pve/rpool/ROOT/pve-1@offsite1-20200218175435: receiving full
|
[Target] backup/server1/rpool/ROOT/server1-1@offsite1-20200218175435: receiving full
|
||||||
[Target] backup/pve/rpool/ROOT/pve-1@offsite1-20200218175547: receiving incremental
|
[Target] backup/server1/rpool/ROOT/server1-1@offsite1-20200218175547: receiving incremental
|
||||||
[Target] backup/pve/rpool/ROOT/pve-1@offsite1-20200218175706: receiving incremental
|
[Target] backup/server1/rpool/ROOT/server1-1@offsite1-20200218175706: receiving incremental
|
||||||
[Target] backup/pve/rpool/ROOT/pve-1@offsite1-20200218180049: receiving incremental
|
[Target] backup/server1/rpool/ROOT/server1-1@offsite1-20200218180049: receiving incremental
|
||||||
[Target] backup/pve/rpool/ROOT/pve-1@offsite1-20200218180123: receiving incremental
|
[Target] backup/server1/rpool/ROOT/server1-1@offsite1-20200218180123: receiving incremental
|
||||||
[Target] backup/pve/rpool/data@offsite1-20200218175435: receiving full
|
[Target] backup/server1/rpool/data@offsite1-20200218175435: receiving full
|
||||||
[Target] backup/pve/rpool/data/vm-100-disk-0@offsite1-20200218175435: receiving full
|
[Target] backup/server1/rpool/data/vm-100-disk-0@offsite1-20200218175435: receiving full
|
||||||
...
|
...
|
||||||
```
|
```
|
||||||
|
|
||||||
@@ -247,7 +253,45 @@ Once you've got the correct settings for your situation, you can just store the command in your cronjob.

Or just create a script and run it manually when you need it.
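For example, a minimal cronjob sketch (the schedule, log path and command reuse the example above and are illustrative assumptions, not part of this project):

```console
# /etc/crontab -- run the nightly pull-backup at 03:00
0 3 * * * root zfs-autobackup --ssh-source server1.server.com offsite1 backup/server1 >/var/log/zfs-autobackup.log 2>&1
```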
-### Thinning out obsolete snapshots
+## Use as snapshot tool
+
+You can use zfs-autobackup to only make snapshots.
+
+Just don't specify the target-path:
+
+```console
+root@ws1:~# zfs-autobackup test --verbose
+zfs-autobackup v3.0-rc12 - Copyright 2020 E.H.Eefting (edwin@datux.nl)
+
+#### Source settings
+[Source] Datasets are local
+[Source] Keep the last 10 snapshots.
+[Source] Keep every 1 day, delete after 1 week.
+[Source] Keep every 1 week, delete after 1 month.
+[Source] Keep every 1 month, delete after 1 year.
+[Source] Selects all datasets that have property 'autobackup:test=true' (or childs of datasets that have 'autobackup:test=child')
+
+#### Selecting
+[Source] test_source1/fs1: Selected (direct selection)
+[Source] test_source1/fs1/sub: Selected (inherited selection)
+[Source] test_source2/fs2: Ignored (only childs)
+[Source] test_source2/fs2/sub: Selected (inherited selection)
+
+#### Snapshotting
+[Source] Creating snapshots test-20200710125958 in pool test_source1
+[Source] Creating snapshots test-20200710125958 in pool test_source2
+
+#### Thinning source
+[Source] test_source1/fs1@test-20200710125948: Destroying
+[Source] test_source1/fs1/sub@test-20200710125948: Destroying
+[Source] test_source2/fs2/sub@test-20200710125948: Destroying
+
+#### All operations completed successfully
+(No target_path specified, only operated as snapshot tool.)
+```
+
+This also allows you to make several snapshots during the day, but only back up the data at night when the server is not busy.
+
+## Thinning out obsolete snapshots

The thinner is the thing that destroys old snapshots on the source and target.
@@ -255,7 +299,25 @@ The thinner operates "stateless": There is nothing in the name or properties of

Note that the thinner will ONLY destroy snapshots that match the naming pattern of zfs-autobackup. If you use `--other-snapshots`, it won't destroy those snapshots after replicating them to the target.

-#### Thinning schedule
+### Destroying missing datasets
+
+When a dataset has been destroyed or deselected on the source, but still exists on the target, we call it a missing dataset. Missing datasets will still be thinned out according to the schedule.
+
+The final snapshot will never be destroyed, unless you specify a **deadline** with the `--destroy-missing` option:
+
+In that case it will look at the last snapshot we took and determine if it is older than the deadline you specified. e.g: `--destroy-missing 30d` will start destroying things 30 days after the last snapshot.
+
+#### After the deadline
+
+When the deadline has passed, all our snapshots except the last one will be destroyed, regardless of the normal thinning schedule.
+
+The dataset has to have the following properties to be finally really destroyed:
+
+* The dataset has no direct child-filesystems or volumes.
+* The only snapshot left is the last one created by zfs-autobackup.
+* The remaining snapshot has no clones.
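A hedged sketch of what such an invocation could look like (reusing the dataset names from the example above):

```console
root@backup:~# zfs-autobackup --ssh-source server1.server.com offsite1 backup/server1 --destroy-missing 30d --verbose
```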
+### Thinning schedule

The default thinning schedule is: `10,1d1w,1w1m,1m1y`.
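To illustrate the syntax (an assumed variation, not taken from the original text): the same schedule format is accepted by `--keep-source` and `--keep-target`, so you can keep less on the source than on the target:

```console
root@backup:~# zfs-autobackup --ssh-source server1.server.com offsite1 backup/server1 \
    --keep-source 10,1d1w --keep-target 10,1d1w,1w1m,1m1y
```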
@@ -296,7 +358,7 @@ If you want to keep as few snapshots as possible, just specify 0. (`--keep-source`)

If you want to keep ALL the snapshots, just specify a very high number.

-#### More details about the Thinner
+### More details about the Thinner

We will give a practical example of how the thinner operates.
@@ -328,11 +390,10 @@ Snapshots on the source that still have to be sent to the target won't be destroyed.

## Tips

* Use ```--debug``` if something goes wrong and you want to see the commands that are executed. This will also stop at the first error.
-* You can split up the snapshotting and sending tasks by creating two cronjobs. Use ```--no-send``` for the snapshotter-cronjob and use ```--no-snapshot``` for the send-cronjob. This is useful if you only want to send at night or if your send take too long.
+* You can split up the snapshotting and sending tasks by creating two cronjobs. Create a separate snapshotter-cronjob by just omitting target-path.
* Set the ```readonly``` property of the target filesystem to ```on```. This prevents changes on the target side. (Normally, if there are changes the next backup will fail and will require a zfs rollback.) Note that readonly means you can't change the CONTENTS of the dataset directly. It's still possible to receive new datasets and manipulate properties etc. (see the example after this list)
* Use ```--clear-refreservation``` to save space on your backup server.
* Use ```--clear-mountpoint``` to prevent the target server from mounting the backed-up filesystem in the wrong place during a reboot.
-* Use ```--resume``` to be able to resume aborted backups. (not all zfs versions support this)
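A minimal sketch of the readonly tip above, using the target path from the earlier example:

```console
root@backup:~# zfs set readonly=on backup/server1
```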
### Speeding up SSH
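The body of this section is outside the hunks shown in this diff. As a rough sketch of the usual approach (connection multiplexing via an `~/.ssh/config` entry; these options are standard OpenSSH settings, assumed rather than taken from this project):

```console
root@backup:~# cat >> ~/.ssh/config <<EOF
Host server1.server.com
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 3600
EOF
```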
@@ -378,22 +439,24 @@ usage: zfs-autobackup [-h] [--ssh-config SSH_CONFIG] [--ssh-source SSH_SOURCE]

                      [--keep-target KEEP_TARGET] [--other-snapshots]
                      [--no-snapshot] [--no-send] [--min-change MIN_CHANGE]
                      [--allow-empty] [--ignore-replicated] [--no-holds]
-                     [--resume] [--strip-path STRIP_PATH]
-                     [--clear-refreservation] [--clear-mountpoint]
+                     [--strip-path STRIP_PATH] [--clear-refreservation]
+                     [--clear-mountpoint]
                      [--filter-properties FILTER_PROPERTIES]
                      [--set-properties SET_PROPERTIES] [--rollback]
                      [--destroy-incompatible] [--ignore-transfer-errors]
                      [--raw] [--test] [--verbose] [--debug] [--debug-output]
                      [--progress]
-                     backup_name target_path
+                     backup-name [target-path]

-zfs-autobackup v3.0-rc8 - Copyright 2020 E.H.Eefting (edwin@datux.nl)
+zfs-autobackup v3.0-rc12 - Copyright 2020 E.H.Eefting (edwin@datux.nl)

positional arguments:
-  backup_name           Name of the backup (you should set the zfs property
+  backup-name           Name of the backup (you should set the zfs property
                        "autobackup:backup-name" to true on filesystems you
                        want to backup
-  target_path           Target ZFS filesystem
+  target-path           Target ZFS filesystem (optional: if not specified,
+                        zfs-autobackup will only operate as snapshot-tool on
+                        source)

optional arguments:
  -h, --help            show this help message and exit
|
|||||||
10,1d1w,1w1m,1m1y
|
10,1d1w,1w1m,1m1y
|
||||||
--other-snapshots Send over other snapshots as well, not just the ones
|
--other-snapshots Send over other snapshots as well, not just the ones
|
||||||
created by this tool.
|
created by this tool.
|
||||||
--no-snapshot Dont create new snapshots (useful for finishing
|
--no-snapshot Don't create new snapshots (useful for finishing
|
||||||
uncompleted backups, or cleanups)
|
uncompleted backups, or cleanups)
|
||||||
--no-send Dont send snapshots (useful for cleanups, or if you
|
--no-send Don't send snapshots (useful for cleanups, or if you
|
||||||
want a separate send-cronjob)
|
want a serperate send-cronjob)
|
||||||
--min-change MIN_CHANGE
|
--min-change MIN_CHANGE
|
||||||
Number of bytes written after which we consider a
|
Number of bytes written after which we consider a
|
||||||
dataset changed (default 1)
|
dataset changed (default 1)
|
||||||
@ -425,17 +488,11 @@ optional arguments:
|
|||||||
--ignore-replicated Ignore datasets that seem to be replicated some other
|
--ignore-replicated Ignore datasets that seem to be replicated some other
|
||||||
way. (No changes since lastest snapshot. Useful for
|
way. (No changes since lastest snapshot. Useful for
|
||||||
proxmox HA replication)
|
proxmox HA replication)
|
||||||
--no-holds Dont lock snapshots on the source. (Useful to allow
|
--no-holds Don't lock snapshots on the source. (Useful to allow
|
||||||
proxmox HA replication to switches nodes)
|
proxmox HA replication to switches nodes)
|
||||||
--resume Support resuming of interrupted transfers by using the
|
|
||||||
zfs extensible_dataset feature (both zpools should
|
|
||||||
have it enabled) Disadvantage is that you need to use
|
|
||||||
zfs recv -A if another snapshot is created on the
|
|
||||||
target during a receive. Otherwise it will keep
|
|
||||||
failing.
|
|
||||||
--strip-path STRIP_PATH
|
--strip-path STRIP_PATH
|
||||||
Number of directory to strip from path (use 1 when
|
Number of directories to strip from target path (use 1
|
||||||
cloning zones between 2 SmartOS machines)
|
when cloning zones between 2 SmartOS machines)
|
||||||
--clear-refreservation
|
--clear-refreservation
|
||||||
Filter "refreservation" property. (recommended, safes
|
Filter "refreservation" property. (recommended, safes
|
||||||
space. same as --filter-properties refreservation)
|
space. same as --filter-properties refreservation)
|
||||||
@ -447,7 +504,7 @@ optional arguments:
|
|||||||
filesystems. (you can still restore them with zfs
|
filesystems. (you can still restore them with zfs
|
||||||
inherit -S)
|
inherit -S)
|
||||||
--set-properties SET_PROPERTIES
|
--set-properties SET_PROPERTIES
|
||||||
List of properties to override when receiving
|
List of propererties to override when receiving
|
||||||
filesystems. (you can still restore them with zfs
|
filesystems. (you can still restore them with zfs
|
||||||
inherit -S)
|
inherit -S)
|
||||||
--rollback Rollback changes to the latest target snapshot before
|
--rollback Rollback changes to the latest target snapshot before
|
||||||
@@ -467,7 +524,8 @@ optional arguments:

  --debug               Show zfs commands that are executed, stops after an
                        exception.
  --debug-output        Show zfs commands and their output/exit codes. (noisy)
-  --progress            show zfs progress output (to stderr)
+  --progress            show zfs progress output (to stderr). Enabled by
+                        default on ttys.

When a filesystem fails, zfs_backup will continue and report the number of
failures at the end. Also the exit code will indicate the number of failures.
|
|||||||
|
|
||||||
### It says 'cannot receive incremental stream: invalid backup stream'
|
### It says 'cannot receive incremental stream: invalid backup stream'
|
||||||
|
|
||||||
This usually means you've created a new snapshot on the target side during a backup:
|
This usually means you've created a new snapshot on the target side during a backup. If you restart zfs-autobackup, it will automaticly abort the invalid partially received snapshot and start over.
|
||||||
|
|
||||||
* Solution 1: Restart zfs-autobackup and make sure you don't use --resume. If you did use --resume, be sure to "abort" the receive on the target side with zfs recv -A.
|
|
||||||
* Solution 2: Destroy the newly created snapshot and restart zfs-autobackup.
|
|
||||||
|
|
||||||
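If you ever need to abort a dangling resumable receive by hand (for instance on older versions that still used `--resume`), the standard ZFS command is `zfs recv -A`; the dataset name here is illustrative:

```console
root@backup:~# zfs recv -A backup/server1/rpool/ROOT/server1-1
```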
### It says 'internal error: Invalid argument'
@@ -552,12 +607,22 @@ I use the following backup script on the backup server:

for H in h4 h5 h6; do
  echo "################################### DATA $H"
  #backup data filesystems to a common place
-  ./zfs-autobackup --ssh-source root@$H data_smartos03 zones/backup/zfsbackups/pxe1_data --clear-refreservation --clear-mountpoint --ignore-transfer-errors --strip-path 2 --verbose --resume --ignore-replicated --min-change 200000 --no-holds $@
+  ./zfs-autobackup --ssh-source root@$H data_smartos03 zones/backup/zfsbackups/pxe1_data --clear-refreservation --clear-mountpoint --ignore-transfer-errors --strip-path 2 --verbose --ignore-replicated --min-change 200000 --no-holds $@
  zabbix-job-status backup_$H""_data_smartos03 daily $? >/dev/null 2>/dev/null

  echo "################################### RPOOL $H"
  #backup rpool to own place
-  ./zfs-autobackup --ssh-source root@$H $H""_smartos03 zones/backup/zfsbackups/$H --verbose --clear-refreservation --clear-mountpoint --resume --ignore-transfer-errors $@
+  ./zfs-autobackup --ssh-source root@$H $H""_smartos03 zones/backup/zfsbackups/$H --verbose --clear-refreservation --clear-mountpoint --ignore-transfer-errors $@
  zabbix-job-status backup_$H""_smartos03 daily $? >/dev/null 2>/dev/null
done
```
+# Sponsor list
+
+This project was sponsored by:
+
+* (None so far)
zfs-autobackup (main script)
@@ -26,8 +26,8 @@ if sys.stdout.isatty():

except ImportError:
    pass

-VERSION="3.0-rc11"
+VERSION="3.0"
-HEADER="zfs-autobackup v{} - Copyright 2020 E.H.Eefting (edwin@datux.nl)\n".format(VERSION)
+HEADER="zfs-autobackup v{} - Copyright 2020 E.H.Eefting (edwin@datux.nl)".format(VERSION)

class Log:
    def __init__(self, show_debug=False, show_verbose=False):
@@ -700,10 +700,11 @@ class ZfsDataset():

        self.verbose("Destroying")

-        self.release()
+        if self.is_snapshot:
+            self.release()

        try:
-            self.zfs_node.run(["zfs", "destroy", "-d", self.name])
+            self.zfs_node.run(["zfs", "destroy", self.name])
            self.invalidate()
            self.force_exists=False
            return(True)
@@ -898,12 +899,25 @@ class ZfsDataset():

    @cached_property
    def recursive_datasets(self, types="filesystem,volume"):
-        """get all datasets recursively under us"""
+        """get all (non-snapshot) datasets recursively under us"""
+
+        self.debug("Getting all recursive datasets under us")
+
+        names=self.zfs_node.run(tab_split=False, readonly=True, valid_exitcodes=[ 0 ], cmd=[
+            "zfs", "list", "-r", "-t", types, "-o", "name", "-H", self.name
+        ])
+
+        return(self.from_names(names[1:]))
+
+
+    @cached_property
+    def datasets(self, types="filesystem,volume"):
+        """get all (non-snapshot) datasets directly under us"""

        self.debug("Getting all datasets under us")

        names=self.zfs_node.run(tab_split=False, readonly=True, valid_exitcodes=[ 0 ], cmd=[
-            "zfs", "list", "-r", "-t", types, "-o", "name", "-H", self.name
+            "zfs", "list", "-r", "-t", types, "-o", "name", "-H", "-d", "1", self.name
        ])

        return(self.from_names(names[1:]))
@@ -1093,13 +1107,16 @@ class ZfsDataset():

        return(self.zfs_node.thinner.thin(snapshots, keep_objects=keeps))


-    def thin(self):
+    def thin(self, skip_holds=False):
        """destroys snapshots according to thin_list, except last snapshot"""

        (keeps, obsoletes)=self.thin_list(keeps=self.our_snapshots[-1:])
        for obsolete in obsoletes:
-            obsolete.destroy()
-            self.snapshots.remove(obsolete)
+            if skip_holds and obsolete.is_hold():
+                obsolete.verbose("Keeping (common snapshot)")
+            else:
+                obsolete.destroy()
+                self.snapshots.remove(obsolete)


    def find_common_snapshot(self, target_dataset):
@@ -1531,7 +1548,7 @@ class ZfsNode(ExecuteNode):

                    selected_filesystems.append(dataset)
                    dataset.verbose("Selected (inherited selection)")
                else:
-                    dataset.verbose("Ignored (already a backup)")
+                    dataset.debug("Ignored (already a backup)")
            else:
                dataset.verbose("Ignored (only childs)")
@@ -1551,8 +1568,8 @@ class ZfsAutobackup:

        parser.add_argument('--keep-source', type=str, default="10,1d1w,1w1m,1m1y", help='Thinning schedule for old source snapshots. Default: %(default)s')
        parser.add_argument('--keep-target', type=str, default="10,1d1w,1w1m,1m1y", help='Thinning schedule for old target snapshots. Default: %(default)s')

-        parser.add_argument('backup_name', help='Name of the backup (you should set the zfs property "autobackup:backup-name" to true on filesystems you want to backup')
+        parser.add_argument('backup_name', metavar='backup-name', help='Name of the backup (you should set the zfs property "autobackup:backup-name" to true on filesystems you want to backup')
-        parser.add_argument('target_path', help='Target ZFS filesystem')
+        parser.add_argument('target_path', metavar='target-path', default=None, nargs='?', help='Target ZFS filesystem (optional: if not specified, zfs-autobackup will only operate as snapshot-tool on source)')

        parser.add_argument('--other-snapshots', action='store_true', help='Send over other snapshots as well, not just the ones created by this tool.')
        parser.add_argument('--no-snapshot', action='store_true', help='Don\'t create new snapshots (useful for finishing uncompleted backups, or cleanups)')
@@ -1569,13 +1586,13 @@ class ZfsAutobackup:

        # parser.add_argument('--buffer', default="", help='Use mbuffer with specified size to speedup zfs transfer. (e.g. --buffer 1G) Will also show nice progress output.')

-        # parser.add_argument('--destroy-stale', action='store_true', help='Destroy stale backups that have no more snapshots. Be sure to verify the output before using this! ')
        parser.add_argument('--clear-refreservation', action='store_true', help='Filter "refreservation" property. (recommended, safes space. same as --filter-properties refreservation)')
        parser.add_argument('--clear-mountpoint', action='store_true', help='Set property canmount=noauto for new datasets. (recommended, prevents mount conflicts. same as --set-properties canmount=noauto)')
        parser.add_argument('--filter-properties', type=str, help='List of properties to "filter" when receiving filesystems. (you can still restore them with zfs inherit -S)')
        parser.add_argument('--set-properties', type=str, help='List of propererties to override when receiving filesystems. (you can still restore them with zfs inherit -S)')
        parser.add_argument('--rollback', action='store_true', help='Rollback changes to the latest target snapshot before starting. (normally you can prevent changes by setting the readonly property on the target_path to on)')
        parser.add_argument('--destroy-incompatible', action='store_true', help='Destroy incompatible snapshots on target. Use with care! (implies --rollback)')
+        parser.add_argument('--destroy-missing', type=str, default=None, help='Destroy datasets on target that are missing on the source. Specify the time since the last snapshot, e.g: --destroy-missing 30d')
        parser.add_argument('--ignore-transfer-errors', action='store_true', help='Ignore transfer errors (still checks if received filesystem exists. useful for acltype errors)')
        parser.add_argument('--raw', action='store_true', help='For encrypted datasets, send data exactly as it exists on disk.')
@@ -1625,6 +1642,146 @@ class ZfsAutobackup:

        self.log.verbose("")
        self.log.verbose("#### "+title)

+    # sync datasets, or thin-only on both sides
+    # target is needed for this.
+    def sync_datasets(self, source_node, source_datasets):
+
+        description="[Target]"
+
+        self.set_title("Target settings")
+
+        target_thinner=Thinner(self.args.keep_target)
+        target_node=ZfsNode(self.args.backup_name, self, ssh_config=self.args.ssh_config, ssh_to=self.args.ssh_target, readonly=self.args.test, debug_output=self.args.debug_output, description=description, thinner=target_thinner)
+        target_node.verbose("Receive datasets under: {}".format(self.args.target_path))
+
+        if self.args.no_send:
+            self.set_title("Thinning source and target")
+        else:
+            self.set_title("Sending and thinning")
+
+        #check if exists, to prevent vague errors
+        target_dataset=ZfsDataset(target_node, self.args.target_path)
+        if not target_dataset.exists:
+            self.error("Target path '{}' does not exist. Please create this dataset first.".format(target_dataset))
+            return(255)
+
+        if self.args.filter_properties:
+            filter_properties=self.args.filter_properties.split(",")
+        else:
+            filter_properties=[]
+
+        if self.args.set_properties:
+            set_properties=self.args.set_properties.split(",")
+        else:
+            set_properties=[]
+
+        if self.args.clear_refreservation:
+            filter_properties.append("refreservation")
+
+        if self.args.clear_mountpoint:
+            set_properties.append("canmount=noauto")
+
+        #sync datasets
+        fail_count=0
+        target_datasets=[]
+        for source_dataset in source_datasets:
+
+            try:
+                #determine corresponding target_dataset
+                target_name=self.args.target_path + "/" + source_dataset.lstrip_path(self.args.strip_path)
+                target_dataset=ZfsDataset(target_node, target_name)
+                target_datasets.append(target_dataset)
+
+                #ensure parents exists
+                #TODO: this isnt perfect yet, in some cases it can create parents when it shouldn't.
+                if not self.args.no_send and not target_dataset.parent in target_datasets and not target_dataset.parent.exists:
+                    target_dataset.parent.create_filesystem(parents=True)
+
+                #determine common zpool features
+                source_features=source_node.get_zfs_pool(source_dataset.split_path()[0]).features
+                target_features=target_node.get_zfs_pool(target_dataset.split_path()[0]).features
+                common_features=source_features and target_features
+                # source_dataset.debug("Common features: {}".format(common_features))
+
+                source_dataset.sync_snapshots(target_dataset, show_progress=self.args.progress, features=common_features, filter_properties=filter_properties, set_properties=set_properties, ignore_recv_exit_code=self.args.ignore_transfer_errors, source_holds= not self.args.no_holds, rollback=self.args.rollback, raw=self.args.raw, other_snapshots=self.args.other_snapshots, no_send=self.args.no_send, destroy_incompatible=self.args.destroy_incompatible)
+            except Exception as e:
+                fail_count=fail_count+1
+                source_dataset.error("FAILED: "+str(e))
+                if self.args.debug:
+                    raise
+
+        self.thin_missing_targets(ZfsDataset(target_node, self.args.target_path), target_datasets)
+
+        return(fail_count)
+
+
+    def thin_missing_targets(self, target_dataset, used_target_datasets):
+        """thin/destroy target datasets that are missing on the source."""
+
+        self.debug("Thinning obsolete datasets")
+
+        for dataset in target_dataset.recursive_datasets:
+            try:
+                if dataset not in used_target_datasets:
+                    dataset.debug("Missing on source, thinning")
+                    dataset.thin()
+
+                    #destroy_missing enabled?
+                    if self.args.destroy_missing!=None:
+
+                        #cant do anything without our own snapshots
+                        if not dataset.our_snapshots:
+                            if dataset.datasets:
+                                dataset.debug("Destroy missing: ignoring")
+                            else:
+                                dataset.verbose("Destroy missing: has no snapshots made by us. (please destroy manually)")
+                        else:
+                            #past the deadline?
+                            deadline_ttl=ThinnerRule("0s"+self.args.destroy_missing).ttl
+                            now=int(time.time())
+                            if dataset.our_snapshots[-1].timestamp + deadline_ttl > now:
+                                dataset.verbose("Destroy missing: Waiting for deadline.")
+                            else:
+
+                                dataset.debug("Destroy missing: Removing our snapshots.")
+
+                                #remove all our snaphots, except last, to safe space in case we fail later on
+                                for snapshot in dataset.our_snapshots[:-1]:
+                                    snapshot.destroy(fail_exception=True)
+
+                                #does it have other snapshots?
+                                has_others=False
+                                for snapshot in dataset.snapshots:
+                                    if not snapshot.is_ours():
+                                        has_others=True
+                                        break
+
+                                if has_others:
+                                    dataset.verbose("Destroy missing: Still in use by other snapshots")
+                                else:
+                                    if dataset.datasets:
+                                        dataset.verbose("Destroy missing: Still has children here.")
+                                    else:
+                                        dataset.verbose("Destroy missing.")
+                                        dataset.our_snapshots[-1].destroy(fail_exception=True)
+                                        dataset.destroy(fail_exception=True)
+
+            except Exception as e:
+                dataset.error("Error during destoy missing ({})".format(str(e)))
+
+
+    def thin_source(self, source_datasets):
+
+        self.set_title("Thinning source")
+
+        for source_dataset in source_datasets:
+            source_dataset.thin(skip_holds=True)
+

    def run(self):

        try:
@@ -1633,30 +1790,21 @@ class ZfsAutobackup:

            if self.args.test:
                self.verbose("TEST MODE - SIMULATING WITHOUT MAKING ANY CHANGES")

-            self.set_title("Settings summary")
+            self.set_title("Source settings")

            description="[Source]"
            source_thinner=Thinner(self.args.keep_source)
            source_node=ZfsNode(self.args.backup_name, self, ssh_config=self.args.ssh_config, ssh_to=self.args.ssh_source, readonly=self.args.test, debug_output=self.args.debug_output, description=description, thinner=source_thinner)
-            source_node.verbose("Send all datasets that have 'autobackup:{}=true' or 'autobackup:{}=child'".format(self.args.backup_name, self.args.backup_name))
+            source_node.verbose("Selects all datasets that have property 'autobackup:{}=true' (or childs of datasets that have 'autobackup:{}=child')".format(self.args.backup_name, self.args.backup_name))

-            self.verbose("")
-
-            description="[Target]"
-            target_thinner=Thinner(self.args.keep_target)
-            target_node=ZfsNode(self.args.backup_name, self, ssh_config=self.args.ssh_config, ssh_to=self.args.ssh_target, readonly=self.args.test, debug_output=self.args.debug_output, description=description, thinner=target_thinner)
-            target_node.verbose("Receive datasets under: {}".format(self.args.target_path))

            self.set_title("Selecting")
            selected_source_datasets=source_node.selected_datasets
            if not selected_source_datasets:
-                self.error("No source filesystems selected, please do a 'zfs set autobackup:{0}=true' on the source datasets you want to backup.".format(self.args.backup_name))
+                self.error("No source filesystems selected, please do a 'zfs set autobackup:{0}=true' on the source datasets you want to select.".format(self.args.backup_name))
                return(255)

            source_datasets=[]

            #filter out already replicated stuff?
            if not self.args.ignore_replicated:
                source_datasets=selected_source_datasets
@@ -1668,80 +1816,34 @@ class ZfsAutobackup:

                else:
                    selected_source_dataset.verbose("Ignoring, already replicated")

            if not self.args.no_snapshot:
                self.set_title("Snapshotting")
                source_node.consistent_snapshot(source_datasets, source_node.new_snapshotname(), min_changed_bytes=self.args.min_change)

+            #if target is specified, we sync the datasets, otherwise we just thin the source. (e.g. snapshot mode)
-            if self.args.no_send:
-                self.set_title("Thinning")
+            if self.args.target_path:
+                fail_count=self.sync_datasets(source_node, source_datasets)
            else:
-                self.set_title("Sending and thinning")
+                self.thin_source(source_datasets)
+                fail_count=0

-            if self.args.filter_properties:
-                filter_properties=self.args.filter_properties.split(",")
-            else:
-                filter_properties=[]
-
-            if self.args.set_properties:
-                set_properties=self.args.set_properties.split(",")
-            else:
-                set_properties=[]
-
-            if self.args.clear_refreservation:
-                filter_properties.append("refreservation")
-
-            if self.args.clear_mountpoint:
-                set_properties.append("canmount=noauto")
-
-            #sync datasets
-            fail_count=0
-            target_datasets=[]
-            for source_dataset in source_datasets:
-
-                try:
-                    #determine corresponding target_dataset
-                    target_name=self.args.target_path + "/" + source_dataset.lstrip_path(self.args.strip_path)
-                    target_dataset=ZfsDataset(target_node, target_name)
-                    target_datasets.append(target_dataset)
-
-                    #ensure parents exists
-                    #TODO: this isnt perfect yet, in some cases it can create parents when it shouldn't.
-                    if not self.args.no_send and not target_dataset.parent in target_datasets and not target_dataset.parent.exists:
-                        target_dataset.parent.create_filesystem(parents=True)
-
-                    #determine common zpool features
-                    source_features=source_node.get_zfs_pool(source_dataset.split_path()[0]).features
-                    target_features=target_node.get_zfs_pool(target_dataset.split_path()[0]).features
-                    common_features=source_features and target_features
-                    # source_dataset.debug("Common features: {}".format(common_features))
-
-                    source_dataset.sync_snapshots(target_dataset, show_progress=self.args.progress, features=common_features, filter_properties=filter_properties, set_properties=set_properties, ignore_recv_exit_code=self.args.ignore_transfer_errors, source_holds= not self.args.no_holds, rollback=self.args.rollback, raw=self.args.raw, other_snapshots=self.args.other_snapshots, no_send=self.args.no_send, destroy_incompatible=self.args.destroy_incompatible)
-                except Exception as e:
-                    fail_count=fail_count+1
-                    self.error("DATASET FAILED: "+str(e))
-                    if self.args.debug:
-                        raise
-
-            #also thin target_datasets that are not on the source any more
-            self.debug("Thinning obsolete datasets")
-            for dataset in ZfsDataset(target_node, self.args.target_path).recursive_datasets:
-                if dataset not in target_datasets:
-                    dataset.debug("Missing on source")
-                    dataset.thin()

            if not fail_count:
                if self.args.test:
                    self.set_title("All tests successfull.")
                else:
-                    self.set_title("All backups completed successfully")
+                    self.set_title("All operations completed successfully")
+                    if not self.args.target_path:
+                        self.verbose("(No target_path specified, only operated as snapshot tool.)")

            else:
-                self.error("{} datasets failed!".format(fail_count))
+                if fail_count!=255:
+                    self.error("{} failures!".format(fail_count))

            if self.args.test:
-                self.verbose("TEST MODE - DID NOT MAKE ANY BACKUPS!")
+                self.verbose("")
+                self.verbose("TEST MODE - DID NOT MAKE ANY CHANGES!")

            return(fail_count)
test_destroymissing.py (new file, 135 lines)
|
|||||||
|
|
||||||
|
from basetest import *
|
||||||
|
|
||||||
|
|
||||||
|
class TestZfsNode(unittest2.TestCase):
|
||||||
|
|
||||||
|
def setUp(self):
|
||||||
|
prepare_zpools()
|
||||||
|
self.longMessage=True
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
def test_destroymissing(self):
|
||||||
|
|
||||||
|
#initial backup
|
||||||
|
with patch('time.strftime', return_value="10101111000000"): #1000 years in past
|
||||||
|
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-holds".split(" ")).run())
|
||||||
|
|
||||||
|
with patch('time.strftime', return_value="20101111000000"): #far in past
|
||||||
|
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-holds --allow-empty".split(" ")).run())
|
||||||
|
|
||||||
        with self.subTest("Should do nothing yet"):
            with OutputIO() as buf:
                with redirect_stdout(buf):
                    self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-snapshot --destroy-missing 0s".split(" ")).run())

            print(buf.getvalue())
            self.assertNotIn(": Destroy missing", buf.getvalue())

        with self.subTest("missing dataset of us that still has children"):

            #just deselect it so it counts as 'missing'
            shelltest("zfs set autobackup:test=child test_source1/fs1")

            with OutputIO() as buf:
                with redirect_stdout(buf), redirect_stderr(buf):
                    self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-snapshot --destroy-missing 0s".split(" ")).run())

            print(buf.getvalue())
            #should have done the snapshot cleanup for destroy missing:
            self.assertIn("fs1@test-10101111000000: Destroying", buf.getvalue())
            self.assertIn("fs1: Destroy missing: Still has children here.", buf.getvalue())

            shelltest("zfs inherit autobackup:test test_source1/fs1")

        with self.subTest("Normal destroyed leaf"):
            shelltest("zfs destroy -r test_source1/fs1/sub")

            #wait for deadline of last snapshot
            with OutputIO() as buf:
                with redirect_stdout(buf):
                    #100y: the latest snapshot is not old enough, while the second-to-latest IS old enough:
                    self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-snapshot --destroy-missing 100y".split(" ")).run())

            print(buf.getvalue())
            self.assertIn(": Waiting for deadline", buf.getvalue())

            #past deadline, destroy
            with OutputIO() as buf:
                with redirect_stdout(buf):
                    self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-snapshot --destroy-missing 1y".split(" ")).run())

            print(buf.getvalue())
            self.assertIn("sub: Destroying", buf.getvalue())

        with self.subTest("Leaf with other snapshot still using it"):
            shelltest("zfs destroy -r test_source1/fs1")
            shelltest("zfs snapshot -r test_target1/test_source1/fs1@other1")

            with OutputIO() as buf:
                with redirect_stdout(buf):
                    self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-snapshot --destroy-missing 0s".split(" ")).run())

            print(buf.getvalue())

            #can't finish because still in use:
            self.assertIn("fs1: Destroy missing: Still in use", buf.getvalue())

            shelltest("zfs destroy test_target1/test_source1/fs1@other1")

        with self.subTest("In use by clone"):
            shelltest("zfs clone test_target1/test_source1/fs1@test-20101111000000 test_target1/clone1")

            with OutputIO() as buf:
                with redirect_stdout(buf), redirect_stderr(buf):
                    self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-snapshot --destroy-missing 0s".split(" ")).run())

            print(buf.getvalue())
            #now tries to destroy our own last snapshot (before the final destroy of the dataset)
            self.assertIn("fs1@test-20101111000000: Destroying", buf.getvalue())
            #but can't finish because still in use (the "destoy" typo below matches the program's actual output):
            self.assertIn("fs1: Error during destoy missing", buf.getvalue())

            shelltest("zfs destroy test_target1/clone1")

        with self.subTest("Should leave test_source1 parent"):

            with OutputIO() as buf:
                with redirect_stdout(buf), redirect_stderr(buf):
                    self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-snapshot --destroy-missing 0s".split(" ")).run())

            print(buf.getvalue())
            #should have done the snapshot cleanup for destroy missing:
            self.assertIn("fs1: Destroying", buf.getvalue())

            with OutputIO() as buf:
                with redirect_stdout(buf), redirect_stderr(buf):
                    self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-snapshot --destroy-missing 0s".split(" ")).run())

            print(buf.getvalue())
            #on the second run it sees the dangling ex-parent but doesn't know what to do with it (since it has no snapshot of its own)
            self.assertIn("test_source1: Destroy missing: has no snapshots made by us.", buf.getvalue())

        #end result
        r=shelltest("zfs list -H -o name -r -t all test_target1")
        self.assertMultiLineEqual(r,"""
test_target1
test_target1/test_source1
test_target1/test_source2
test_target1/test_source2/fs2
test_target1/test_source2/fs2/sub
test_target1/test_source2/fs2/sub@test-10101111000000
test_target1/test_source2/fs2/sub@test-20101111000000
""")
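What this test pins down about `--destroy-missing`: a dataset that disappeared from the source is only destroyed on the target once its newest zfs-autobackup snapshot is older than the given age, and datasets that are still in use (other snapshots, clones) or still have children are left alone. A rough sketch of the deadline rule as the test implies it, not the actual implementation:

```python
# Illustrative only; not zfs-autobackup's real code.
import time

def past_deadline(newest_snapshot_secs, max_age_secs, now_secs=None):
    """True once a missing dataset's newest snapshot is older than max_age_secs."""
    now_secs = time.time() if now_secs is None else now_secs
    return (now_secs - newest_snapshot_secs) > max_age_secs

one_year = 365 * 24 * 3600
# A roughly 1 year old snapshot against --destroy-missing 100y: still "Waiting for deadline".
print(past_deadline(time.time() - one_year, 100 * one_year))   # False
# The same snapshot against --destroy-missing 0s: past the deadline, destroy.
print(past_deadline(time.time() - one_year, 0))                # True
```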
20 test_regressions.py Normal file
@ -0,0 +1,20 @@
from basetest import *


class TestZfsNode(unittest2.TestCase):

    def setUp(self):
        prepare_zpools()
        self.longMessage=True

    # #resume initial backup
    # def test_keepsource0(self):

    #     #somehow only specifying --allow-empty --keep-source 0 failed:
    #     with patch('time.strftime', return_value="20101111000000"):
    #         self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty --keep-source 0".split(" ")).run())

    #     with patch('time.strftime', return_value="20101111000001"):
    #         self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty --keep-source 0".split(" ")).run())
@ -13,12 +13,57 @@ class TestZfsAutobackup(unittest2.TestCase):
        self.assertEqual(ZfsAutobackup("test test_target1 --keep-source -1".split(" ")).run(), 255)

    def test_snapshotmode(self):
        """test snapshot tool mode"""

        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose".split(" ")).run())

        with patch('time.strftime', return_value="20101111000001"):
            self.assertFalse(ZfsAutobackup("test test_target1 --allow-empty --verbose".split(" ")).run())

        with patch('time.strftime', return_value="20101111000002"):
            self.assertFalse(ZfsAutobackup("test --verbose --allow-empty --keep-source 0".split(" ")).run())

        #on source: only has 1 and 2 (1 had a hold)
        #on target: has 0 and 1
        #XXX:
        r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
        self.assertMultiLineEqual(r,"""
test_source1
test_source1/fs1
test_source1/fs1@test-20101111000001
test_source1/fs1@test-20101111000002
test_source1/fs1/sub
test_source1/fs1/sub@test-20101111000001
test_source1/fs1/sub@test-20101111000002
test_source2
test_source2/fs2
test_source2/fs2/sub
test_source2/fs2/sub@test-20101111000001
test_source2/fs2/sub@test-20101111000002
test_source2/fs3
test_source2/fs3/sub
test_target1
test_target1/test_source1
test_target1/test_source1/fs1
test_target1/test_source1/fs1@test-20101111000000
test_target1/test_source1/fs1@test-20101111000001
test_target1/test_source1/fs1/sub
test_target1/test_source1/fs1/sub@test-20101111000000
test_target1/test_source1/fs1/sub@test-20101111000001
test_target1/test_source2
test_target1/test_source2/fs2
test_target1/test_source2/fs2/sub
test_target1/test_source2/fs2/sub@test-20101111000000
test_target1/test_source2/fs2/sub@test-20101111000001
""")
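        # Note on the third run above: with no target_path argument,
        # zfs-autobackup acts purely as a snapshot/thinning tool on the source,
        # and --keep-source 0 thins everything it no longer needs; per the
        # listing, the source keeps only 000001 (which still carries a hold)
        # and the fresh 000002.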

    def test_defaults(self):

        with self.subTest("no datasets selected"):
            with OutputIO() as buf:
                with redirect_stderr(buf):
                    with patch('time.strftime', return_value="20101111000000"):
@ -129,8 +174,50 @@ test_target1/test_source2/fs2/sub@test-20101111000000 userrefs 0 -
test_target1/test_source2/fs2/sub@test-20101111000001 userrefs 1 -
""")

        #make sure time handling is correct: try to make snapshots a year apart and verify that only snapshots at most about 1y old are kept
        with self.subTest("test time checking"):
            with patch('time.strftime', return_value="20111111000000"):
                self.assertFalse(ZfsAutobackup("test test_target1 --allow-empty --verbose".split(" ")).run())

            time_str="20111112000000" #day in the "future"
            future_timestamp=time.mktime(time.strptime(time_str,"%Y%m%d%H%M%S"))
            with patch('time.time', return_value=future_timestamp):
                with patch('time.strftime', return_value="20111111000001"):
                    self.assertFalse(ZfsAutobackup("test test_target1 --allow-empty --verbose --keep-source 1y1y --keep-target 1d1y".split(" ")).run())

            r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
            self.assertMultiLineEqual(r,"""
test_source1
test_source1/fs1
test_source1/fs1@test-20111111000000
test_source1/fs1@test-20111111000001
test_source1/fs1/sub
test_source1/fs1/sub@test-20111111000000
test_source1/fs1/sub@test-20111111000001
test_source2
test_source2/fs2
test_source2/fs2/sub
test_source2/fs2/sub@test-20111111000000
test_source2/fs2/sub@test-20111111000001
test_source2/fs3
test_source2/fs3/sub
test_target1
test_target1/test_source1
test_target1/test_source1/fs1
test_target1/test_source1/fs1@test-20111111000000
test_target1/test_source1/fs1@test-20111111000001
test_target1/test_source1/fs1/sub
test_target1/test_source1/fs1/sub@test-20111111000000
test_target1/test_source1/fs1/sub@test-20111111000001
test_target1/test_source2
test_target1/test_source2/fs2
test_target1/test_source2/fs2/sub
test_target1/test_source2/fs2/sub@test-20111111000000
test_target1/test_source2/fs2/sub@test-20111111000001
""")
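        # Retention syntax used above, as I read the thinner schedules: "1y1y"
        # keeps one snapshot per year for one year, "1d1y" one per day for one
        # year. With time.time() patched one day ahead, both 2011 snapshots
        # still fall inside those windows while the older 2010 ones age out,
        # which the listing above verifies.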

    def test_ignore_othersnaphots(self):

@ -711,6 +798,44 @@ test_target1/test_source2/fs2/sub@test-20101111000001
""")

    def test_migrate(self):
        """test migration from other snapshotting systems. zfs-autobackup should be able to continue from any common snapshot, not just its own."""

        shelltest("zfs snapshot test_source1/fs1@migrate1")
        shelltest("zfs create test_target1/test_source1")
        shelltest("zfs send test_source1/fs1@migrate1 | zfs recv test_target1/test_source1/fs1")

        with patch('time.strftime', return_value="20101111000000"):
            self.assertFalse(ZfsAutobackup("test test_target1 --verbose".split(" ")).run())

        r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
        self.assertMultiLineEqual(r,"""
test_source1
test_source1/fs1
test_source1/fs1@migrate1
test_source1/fs1@test-20101111000000
test_source1/fs1/sub
test_source1/fs1/sub@test-20101111000000
test_source2
test_source2/fs2
test_source2/fs2/sub
test_source2/fs2/sub@test-20101111000000
test_source2/fs3
test_source2/fs3/sub
test_target1
test_target1/test_source1
test_target1/test_source1/fs1
test_target1/test_source1/fs1@migrate1
test_target1/test_source1/fs1@test-20101111000000
test_target1/test_source1/fs1/sub
test_target1/test_source1/fs1/sub@test-20101111000000
test_target1/test_source2
test_target1/test_source2/fs2
test_target1/test_source2/fs2/sub
test_target1/test_source2/fs2/sub@test-20101111000000
""")
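    # Illustrative sketch only, not the real implementation: the behaviour
    # test_migrate exercises boils down to finding the newest snapshot name
    # both sides share and sending increments from there, whether or not
    # zfs-autobackup created that snapshot itself.
    @staticmethod
    def _newest_common_snapshot_sketch(source_snaps, target_snaps):
        """Snapshot name lists ordered oldest-to-newest, as 'zfs list' returns them."""
        target_set = set(target_snaps)
        for snap in reversed(source_snaps):
            if snap in target_set:
                return snap  # e.g. "migrate1" in the test above
        return None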

    ###########################
    # TODO:

@ -718,3 +843,5 @@ test_target1/test_source2/fs2/sub@test-20101111000001
        self.skipTest("todo: later when travis supports zfs 0.8")