Compare commits


34 Commits

Author SHA1 Message Date
1730b860e3 bookmark support messy: not all options are supported by zfs, determining changed datasets also not possible. on hold for now 2019-02-18 16:56:46 +01:00
1b6cf6ccb9 bookmark support, work in progress. 2019-02-17 21:22:35 +01:00
291040eb2d obsolete, now that we have resume support in zfs 2019-02-16 22:28:15 +01:00
d12a132f3f added buffering 2019-02-16 21:51:30 +01:00
2255e0e691 add some performance options 2019-02-16 21:00:09 +01:00
6a481ed6a4 do not create snapshots for filesystems without changes. (usefull for proxmox cluster that use internal replication) 2019-02-16 20:30:44 +01:00
11d051122b now hold important snapshots to prevent accidental deletion by administrator or other scripts 2019-02-16 02:06:06 +01:00
511311eee7 nicer error output. you can also set autobackup:blah=child now, to only select childeren (recursively) of the dataset 2019-02-16 00:38:30 +01:00
fa405dce57 option to allow ignoring of transfer errors. this will still check if the filesystem was received. i used it to ignore a bunch of acltype property errors on smartos from proxmox. 2019-02-03 21:52:59 +01:00
bf37322aba Update README.md 2019-02-03 21:07:48 +01:00
a7bf1e8af8 version bump 2019-02-03 21:05:46 +01:00
352c61fd00 Merge branch 'master' of github.com:psy0rz/zfs_autobackup 2019-02-03 21:04:53 +01:00
0fe09ea535 use ~/.ssh/config for these options 2019-02-03 21:04:46 +01:00
64c9b84102 Update README.md 2019-02-03 21:00:39 +01:00
8a2e1d36d7 Update README.md 2019-02-03 20:59:16 +01:00
a120fbb85f Merge pull request #11 from psy0rz/revert-9-feature/ssh-mux
Revert "Feature/ssh mux"
2019-02-03 20:57:44 +01:00
42bbecc571 Revert "Feature/ssh mux" 2019-02-03 20:57:00 +01:00
b8d744869d doc 2019-02-03 20:56:45 +01:00
c253a17b75 info 2019-02-03 20:38:26 +01:00
d1fe00aee2 Merge branch 'master' of github.com:psy0rz/zfs_autobackup 2019-02-03 20:23:52 +01:00
85d2e1a635 Merge pull request #9 from mariusvw/feature/ssh-mux
Feature/ssh mux
2019-02-03 20:23:44 +01:00
9455918708 update docs 2018-06-07 14:08:59 +02:00
5316737388 update docs 2018-06-07 14:06:16 +02:00
c6afa33e62 option to filter properties on zfs recv 2018-06-07 14:01:24 +02:00
a8d0ff9f37 Restored stderr handing to previous state 2018-04-05 18:05:10 +02:00
cc1725e3be Compacted code bit into a function 2018-04-05 10:01:23 +02:00
42b71bbc74 Write STDERR to STDOUT 2018-04-05 10:00:53 +02:00
84d44a267a Exit persistent connection when everything is finished 2018-04-05 09:23:59 +02:00
ba89dc8bb2 Use persistent connections for 600 seconds (10min) 2018-04-05 09:23:39 +02:00
62178e424e Added argument to return exit code 2018-04-05 09:23:08 +02:00
b0ffdb4893 fix: atomic snapshots can only be created per pool. now uses a seperate zfs snapshot command for each pool 2018-03-06 15:54:00 +01:00
cc45122e3e Update zfs_autobackup
not needed module
2018-03-06 15:03:30 +01:00
e872d79677 conflicting name 2017-09-25 13:46:20 +02:00
e74e50d4e8 Update README.md 2017-09-20 10:28:34 +02:00
3 changed files with 986 additions and 1394 deletions

.gitignore (1 deletion)

@@ -1 +0,0 @@
.vscode/settings.json

README.md (265 lines changed)

@@ -1,18 +1,251 @@
# ZFS autobackup v3 - TEST VERSION

Official releases are here: https://github.com/psy0rz/zfs_autobackup/releases

New in v3:

* Complete rewrite, cleaner object-oriented code.
* Python 3 and 2 support.
* Backwards compatible with your current backups and parameters.
* Progressive thinning (via a destroy schedule; the default schedule should be fine for most people)
* Cleaner output, with optional color support (pip install colorama).
* Clear distinction between local and remote output.
* Summary at the beginning, displaying what will happen and the current thinning schedule.
* More efficient destroying/skipping of snapshots on the fly. (no more space issues if your backup is way behind)
* Progress indicator (--progress)
* Better property management (--set-properties and --filter-properties)
* Better resume handling, automatically aborts invalid resumes.
* More robust error handling.
* Prepared for future enhancements.

Added:

# ZFS autobackup

Introduction
============

ZFS autobackup is used to periodically back up ZFS filesystems to other locations. This is done using the very efficient zfs send and receive commands.

It has the following features:

* Automatically selects filesystems to back up by looking at a simple ZFS property. (recursive)
* Creates consistent snapshots. (takes all snapshots at once, atomic)
* Multiple backup modes:
  * "push" local data to a backup server via SSH.
  * "pull" remote data from a server via SSH and back it up locally.
  * Back up local data on the same server.
* Can be scheduled via a simple cronjob or run directly from the commandline.
* Supports resuming of interrupted transfers. (via the zfs extensible_dataset feature)
* Backups and snapshots can be named to prevent conflicts. (multiple backups from and to the same filesystems are no problem)
* Always creates a new snapshot before starting.
* Checks everything and aborts on errors.
* Ability to 'finish' aborted backups to see what goes wrong.
* Easy to debug and has a test mode. Actual unix commands are printed.
* Keeps the latest X snapshots remotely and locally. (default 30, configurable)
* Easy installation:
  * Only one host needs the zfs_autobackup script. The other host just needs ssh and the zfs command.
  * Written in python and uses zfs commands, no 3rd party dependencies or libraries.
  * No separate config files or properties. Just one command you can copy/paste into your backup script.

Usage
====
```
usage: zfs_autobackup [-h] [--ssh-source SSH_SOURCE] [--ssh-target SSH_TARGET]
[--keep-source KEEP_SOURCE] [--keep-target KEEP_TARGET]
[--no-snapshot] [--no-send] [--resume]
[--strip-path STRIP_PATH] [--destroy-stale]
[--clear-refreservation] [--clear-mountpoint]
[--filter-properties FILTER_PROPERTIES] [--rollback]
[--test] [--verbose] [--debug]
backup_name target_fs
ZFS autobackup v2.2
positional arguments:
backup_name Name of the backup (you should set the zfs property
"autobackup:backup-name" to true on filesystems you
want to backup
target_fs Target filesystem
optional arguments:
-h, --help show this help message and exit
--ssh-source SSH_SOURCE
Source host to get backup from. (user@hostname)
Default local.
--ssh-target SSH_TARGET
Target host to push backup to. (user@hostname) Default
local.
--keep-source KEEP_SOURCE
Number of days to keep old snapshots on source.
Default 30.
--keep-target KEEP_TARGET
Number of days to keep old snapshots on target.
Default 30.
--no-snapshot dont create new snapshot (usefull for finishing
uncompleted backups, or cleanups)
--no-send dont send snapshots (usefull to only do a cleanup)
--resume support resuming of interrupted transfers by using the
zfs extensible_dataset feature (both zpools should
have it enabled) Disadvantage is that you need to use
zfs recv -A if another snapshot is created on the
target during a receive. Otherwise it will keep
failing.
--strip-path STRIP_PATH
number of directory to strip from path (use 1 when
cloning zones between 2 SmartOS machines)
--destroy-stale Destroy stale backups that have no more snapshots. Be
sure to verify the output before using this!
--clear-refreservation
Set refreservation property to none for new
filesystems. Usefull when backupping SmartOS volumes.
(recommended)
--clear-mountpoint Sets canmount=noauto property, to prevent the received
filesystem from mounting over existing filesystems.
(recommended)
--filter-properties FILTER_PROPERTIES
Filter properties when receiving filesystems. Can be
specified multiple times. (Example: If you send data
from Linux to FreeNAS, you should filter xattr)
--rollback Rollback changes on the target before starting a
backup. (normally you can prevent changes by setting
the readonly property on the target_fs to on)
--test dont change anything, just show what would be done
(still does all read-only operations)
--verbose verbose output
--debug debug output (shows commands that are executed)
```
Backup example
==============
In this example we're going to back up a SmartOS machine called `smartos01` to our fileserver called `fs1`.
It's important to choose a unique and consistent backup name. In this case we name our backup: `smartos01_fs1`.
Select filesystems to backup
----------------------------
On the source zfs system set the ```autobackup:smartos01_fs1``` zfs property to true:
```
[root@smartos01 ~]# zfs set autobackup:smartos01_fs1=true zones
[root@smartos01 ~]# zfs get -t filesystem autobackup:smartos01_fs1
NAME PROPERTY VALUE SOURCE
zones autobackup:smartos01_fs1 true local
zones/1eb33958-72c1-11e4-af42-ff0790f603dd autobackup:smartos01_fs1 true inherited from zones
zones/3c71a6cd-6857-407c-880c-09225ce4208e autobackup:smartos01_fs1 true inherited from zones
zones/3c905e49-81c0-4a5a-91c3-fc7996f97d47 autobackup:smartos01_fs1 true inherited from zones
...
```
Because we don't want to back up everything, we can exclude certain filesystems by setting the property to false:
```
[root@smartos01 ~]# zfs set autobackup:smartos01_fs1=false zones/backup
[root@smartos01 ~]# zfs get -t filesystem autobackup:smartos01_fs1
NAME PROPERTY VALUE SOURCE
zones autobackup:smartos01_fs1 true local
zones/1eb33958-72c1-11e4-af42-ff0790f603dd autobackup:smartos01_fs1 true inherited from zones
...
zones/backup autobackup:smartos01_fs1 false local
zones/backup/fs1 autobackup:smartos01_fs1 false inherited from zones/backup
...
```
Running zfs_autobackup
----------------------
There are two ways to run the backup, but the end result is always the same. It's just a matter of security (trust relations between the servers) and preference.
First, install your SSH key on the server that you specify with --ssh-source or --ssh-target.
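A minimal sketch of installing the key, assuming key-based root logins are allowed on the remote host (1.2.3.4 is the hypothetical address used in the example below):
```
# generate a key pair if you don't have one yet (leave the passphrase empty for unattended cron use)
ssh-keygen -t rsa -b 4096

# install the public key on the remote server
ssh-copy-id root@1.2.3.4

# verify that passwordless login works
ssh root@1.2.3.4 hostname
```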
Method 1: Run the script on the backup server and pull the data from the server specified by --ssh-source. This is usually the preferred way and prevents a hacked server from accessing the backup data:
```
root@fs1:/home/psy# ./zfs_autobackup --ssh-source root@1.2.3.4 smartos01_fs1 fs1/zones/backup/zfsbackups/smartos01.server.com --verbose
Getting selected source filesystems for backup smartos01_fs1 on root@1.2.3.4
Selected: zones (direct selection)
Selected: zones/1eb33958-72c1-11e4-af42-ff0790f603dd (inherited selection)
Selected: zones/325dbc5e-2b90-11e3-8a3e-bfdcb1582a8d (inherited selection)
...
Ignoring: zones/backup (disabled)
Ignoring: zones/backup/fs1 (disabled)
...
Creating source snapshot smartos01_fs1-20151030203738 on root@1.2.3.4
Getting source snapshot-list from root@1.2.3.4
Getting target snapshot-list from local
Tranferring zones incremental backup between snapshots smartos01_fs1-20151030175345...smartos01_fs1-20151030203738
...
received 1.09MB stream in 1 seconds (1.09MB/sec)
Destroying old snapshots on source
Destroying old snapshots on target
All done
```
Method 2: Run the script on the server and push the data to the backup server specified by --ssh-target:
```
./zfs_autobackup --ssh-target root@2.2.2.2 smartos01_fs1 fs1/zones/backup/zfsbackups/smartos01.server.com --verbose --compress
...
All done
```
Tips
====
* Set the ```readonly``` property of the target filesystem to ```on```. This prevents changes on the target side. If there are changes, the next backup will fail and will require a zfs rollback (for example by using the --rollback option). See the example after this list.
* Use ```--clear-refreservation``` to save space on your backup server.
* Use ```--clear-mountpoint``` to prevent the target server from mounting the backed-up filesystem in the wrong place during a reboot. If this happens on systems like SmartOS or OpenIndiana, svc://filesystem/local won't be able to mount some filesystems and you will need to resolve these issues on the console.
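For the first tip, a minimal sketch of making the target read-only; the dataset name is just the hypothetical backup target used in the examples above:
```
# on the backup server (target side): refuse any local changes to the received filesystems
zfs set readonly=on fs1/zones/backup/zfsbackups

# check that the child filesystems inherit it
zfs get -r readonly fs1/zones/backup/zfsbackups
```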
Speeding up SSH and preventing connection flooding
--------------------------------------------------
Add this to your ~/.ssh/config:
```
Host *
ControlPath ~/.ssh/control-master-%r@%h:%p
ControlMaster auto
ControlPersist 3600
```
This will make all your ssh connections persistent and greatly speed up zfs_autobackup for jobs with short intervals.
Thanks @mariusvw :)
Specifying ssh port or options
------------------------------
The correct way to do this is by creating ~/.ssh/config:
```
Host smartos04
Hostname 1.2.3.4
Port 1234
user root
Compression yes
```
This way you can just specify ```smartos04``` as the host. Compression is also enabled, which helps on slow links.
Look in man ssh_config for many more options.
Troubleshooting
===============
`cannot receive incremental stream: invalid backup stream`
This usually means you've created a new snapshot on the target side during a backup.
* Solution 1: Restart zfs_autobackup and make sure you don't use --resume. If you did use --resume, be sure to "abort" the receive on the target side with zfs recv -A (see the example below).
* Solution 2: Destroy the newly created snapshot and restart zfs_autobackup.
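If you need the abort from solution 1, the command is a one-liner; the dataset name below is just the hypothetical target used in the backup example:
```
# on the target side: throw away the saved state of the interrupted, resumable receive
zfs recv -A fs1/zones/backup/zfsbackups/smartos01.server.com/zones
```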
`internal error: Invalid argument`
In some cases (Linux -> FreeBSD) this means certain properties are not fully supported on the target system.
Try using something like: --filter-properties xattr
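For example, adding that filter to the pull command from the backup example above (host and dataset names are the same hypothetical ones used earlier):
```
./zfs_autobackup --ssh-source root@1.2.3.4 smartos01_fs1 fs1/zones/backup/zfsbackups/smartos01.server.com --filter-properties xattr --verbose
```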
Restore example
===============
Restoring can be done with simple zfs commands. For example, use this to restore a specific SmartOS disk image to a temporary restore location:
```
root@fs1:/home/psy# zfs send fs1/zones/backup/zfsbackups/smartos01.server.com/zones/a3abd6c8-24c6-4125-9e35-192e2eca5908-disk0@smartos01_fs1-20160110000003 | ssh root@2.2.2.2 "zfs recv zones/restore"
```
After that you can rename the disk image from the temporary location to the location of a new SmartOS machine you've created.
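A minimal sketch of that rename; the new machine's dataset name is hypothetical and depends on the uuid SmartOS assigned to it:
```
# move the restored disk image from the temporary location to the new VM's dataset
# (replace <new-vm-uuid> with the uuid of the machine you created)
zfs rename zones/restore zones/<new-vm-uuid>-disk0
```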
Monitoring with Zabbix-jobs
===========================
You can monitor backups by using my zabbix-jobs script. (https://github.com/psy0rz/stuff/tree/master/zabbix-jobs)
Put this command directly after the zfs_autobackup command in your cronjob:
```
zabbix-job-status backup_smartos01_fs1 daily $?
```
This will update the zabbix server with the exit code and will also alert you if the job didn't run for more than 2 days.
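A minimal sketch of such a cron entry combining both commands; the schedule, paths and addresses are assumptions you should adjust:
```
# /etc/cron.d/zfs_autobackup (example only)
0 3 * * * root /root/zfs_autobackup --ssh-source root@1.2.3.4 smartos01_fs1 fs1/zones/backup/zfsbackups/smartos01.server.com; zabbix-job-status backup_smartos01_fs1 daily $?
```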

File diff suppressed because it is too large