Compare commits
34 Commits
| SHA1 |
|---|
| 1730b860e3 |
| 1b6cf6ccb9 |
| 291040eb2d |
| d12a132f3f |
| 2255e0e691 |
| 6a481ed6a4 |
| 11d051122b |
| 511311eee7 |
| fa405dce57 |
| bf37322aba |
| a7bf1e8af8 |
| 352c61fd00 |
| 0fe09ea535 |
| 64c9b84102 |
| 8a2e1d36d7 |
| a120fbb85f |
| 42bbecc571 |
| b8d744869d |
| c253a17b75 |
| d1fe00aee2 |
| 85d2e1a635 |
| 9455918708 |
| 5316737388 |
| c6afa33e62 |
| a8d0ff9f37 |
| cc1725e3be |
| 42b71bbc74 |
| 84d44a267a |
| ba89dc8bb2 |
| 62178e424e |
| b0ffdb4893 |
| cc45122e3e |
| e872d79677 |
| e74e50d4e8 |
README.md
@@ -6,8 +6,8 @@ Introduction

ZFS autobackup is used to periodicly backup ZFS filesystems to other locations. This is done using the very effcient zfs send and receive commands.

It has the following features:

* Automaticly selects filesystems to backup by looking at a simple ZFS property.
* Creates consistent snapshots.
* Automaticly selects filesystems to backup by looking at a simple ZFS property. (recursive)
* Creates consistent snapshots. (takes all snapshots at once, atomic.)
* Multiple backups modes:
* "push" local data to a backup-server via SSH.
* "pull" remote data from a server via SSH and backup it locally.
@@ -23,19 +23,21 @@ It has the following features:
* Easy installation:
* Only one host needs the zfs_autobackup script. The other host just needs ssh and the zfs command.
* Written in python and uses zfs-commands, no 3rd party dependencys or libraries.
* No seperate config files or properties. Just one command you can copy/paste in your backup script.

Usage
====
```
usage: zfs_autobackup [-h] [--ssh-source SSH_SOURCE] [--ssh-target SSH_TARGET]
[--ssh-cipher SSH_CIPHER] [--keep-source KEEP_SOURCE]
[--keep-target KEEP_TARGET] [--no-snapshot] [--no-send]
[--resume] [--strip-path STRIP_PATH] [--destroy-stale]
[--keep-source KEEP_SOURCE] [--keep-target KEEP_TARGET]
[--no-snapshot] [--no-send] [--resume]
[--strip-path STRIP_PATH] [--destroy-stale]
[--clear-refreservation] [--clear-mountpoint]
[--rollback] [--compress] [--test] [--verbose] [--debug]
[--filter-properties FILTER_PROPERTIES] [--rollback]
[--test] [--verbose] [--debug]
backup_name target_fs

ZFS autobackup v2.1
ZFS autobackup v2.2

positional arguments:
backup_name Name of the backup (you should set the zfs property
@@ -51,8 +53,6 @@ optional arguments:
--ssh-target SSH_TARGET
Target host to push backup to. (user@hostname) Default
local.
--ssh-cipher SSH_CIPHER
SSH cipher to use (default None)
--keep-source KEEP_SOURCE
Number of days to keep old snapshots on source.
Default 30.
@@ -64,7 +64,10 @@ optional arguments:
--no-send dont send snapshots (usefull to only do a cleanup)
--resume support resuming of interrupted transfers by using the
zfs extensible_dataset feature (both zpools should
have it enabled)
have it enabled) Disadvantage is that you need to use
zfs recv -A if another snapshot is created on the
target during a receive. Otherwise it will keep
failing.
--strip-path STRIP_PATH
number of directory to strip from path (use 1 when
cloning zones between 2 SmartOS machines)
@@ -77,10 +80,13 @@ optional arguments:
--clear-mountpoint Sets canmount=noauto property, to prevent the received
filesystem from mounting over existing filesystems.
(recommended)
--filter-properties FILTER_PROPERTIES
Filter properties when receiving filesystems. Can be
specified multiple times. (Example: If you send data
from Linux to FreeNAS, you should filter xattr)
--rollback Rollback changes on the target before starting a
backup. (normally you can prevent changes by setting
the readonly property on the target_fs to on)
--compress use compression during zfs send/recv
--test dont change anything, just show what would be done
(still does all read-only operations)
--verbose verbose output
@@ -131,7 +137,7 @@ First install the ssh-key on the server that you specify with --ssh-source or --

Method 1: Run the script on the backup server and pull the data from the server specfied by --ssh-source. This is usually the preferred way and prevents a hacked server from accesing the backup-data:
```
root@fs1:/home/psy# ./zfs_autobackup --ssh-source root@1.2.3.4 smartos01_fs1 fs1/zones/backup/zfsbackups/smartos01.server.com --verbose --compress
root@fs1:/home/psy# ./zfs_autobackup --ssh-source root@1.2.3.4 smartos01_fs1 fs1/zones/backup/zfsbackups/smartos01.server.com --verbose
Getting selected source filesystems for backup smartos01_fs1 on root@1.2.3.4
Selected: zones (direct selection)
Selected: zones/1eb33958-72c1-11e4-af42-ff0790f603dd (inherited selection)
@@ -160,12 +166,63 @@ All done
```

Tips
----
====

* Set the ```readonly``` property of the target filesystem to ```on```. This prevents changes on the target side. If there are changes the next backup will fail and will require a zfs rollback. (by using the --rollback option for example)
* Use ```--clear-refreservation``` to save space on your backup server.
* Use ```--clear-mountpoint``` to prevent the target server from mounting the backupped filesystem in the wrong place during a reboot. If this happens on systems like SmartOS or Openindia, svc://filesystem/local wont be able to mount some stuff and you need to resolve these issues on the console.

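As a minimal sketch of the first tip: the target path is the one from the pull example above, and the snapshot name is only an illustration of the backupname-YYYYMMDDHHMMSS names zfs_autobackup creates (substitute a real one from zfs list):
```
# on the target (backup) server: block changes
zfs set readonly=on fs1/zones/backup/zfsbackups/smartos01.server.com

# if something still changed the target, roll back to the last common snapshot by hand
zfs rollback fs1/zones/backup/zfsbackups/smartos01.server.com@smartos01_fs1-20151231000000
```
Or pass --rollback to zfs_autobackup and let it do the rollback for you.
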
Speeding up SSH and prevent connection flooding
-----------------------------------------------

Add this to your ~/.ssh/config:
```
Host *
ControlPath ~/.ssh/control-master-%r@%h:%p
ControlMaster auto
ControlPersist 3600
```

This will make all your ssh connections persistent and greatly speed up zfs_autobackup for jobs with short intervals.

Thanks @mariusvw :)


Specifying ssh port or options
------------------------------

The correct way to do this is by creating ~/.ssh/config:
```
Host smartos04
Hostname 1.2.3.4
Port 1234
user root
Compression yes
```

This way you can just specify smartos04
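
For example, the pull command from Method 1 above could then use the alias instead of root@1.2.3.4 (backup name and target path are the ones from that example):
```
root@fs1:/home/psy# ./zfs_autobackup --ssh-source smartos04 smartos01_fs1 fs1/zones/backup/zfsbackups/smartos01.server.com --verbose
```
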
Also uses compression on slow links.

Look in man ssh_config for many more options.

Troubleshooting
===============

`cannot receive incremental stream: invalid backup stream`

This usually means you've created a new snapshot on the target side during a backup.
* Solution 1: Restart zfs_autobackup and make sure you dont use --resume. If you did use --resume, be sure to "abort" the recveive on the target side with zfs recv -A.
* Solution 2: Destroy the newly created snapshot and restart zfs_autobackup.

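For Solution 1, the abort step is a plain zfs command run on the target; the filesystem path here is the one from the pull example above:
```
# on the target: abort the interrupted resumable receive, then re-run zfs_autobackup without --resume
zfs recv -A fs1/zones/backup/zfsbackups/smartos01.server.com
```
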
`internal error: Invalid argument`

In some cases (Linux -> FreeBSD) this means certain properties are not fully supported on the target system.

Try using something like: --filter-properties xattr

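For example, spelled out against the pull command from Method 1 (same hosts and paths as in that example):
```
root@fs1:/home/psy# ./zfs_autobackup --ssh-source root@1.2.3.4 smartos01_fs1 fs1/zones/backup/zfsbackups/smartos01.server.com --verbose --filter-properties xattr
```
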
Restore example
===============

@@ -178,17 +235,6 @@ root@fs1:/home/psy# zfs send fs1/zones/backup/zfsbackups/smartos01.server.com/z

After that you can rename the disk image from the temporary location to the location of a new SmartOS machine you've created.

Snapshotting example
====================

Sending huge snapshots cant be resumed when a connection is interrupted: Next time zfs_autobackup is started, the whole snapshot will be transferred again. For this reason you might want to have multiple small snapshots.

The --no-send option can be usefull for this. This way you can already create small snapshots every few hours:
````
[root@smartos2 ~]# zfs_autobackup --ssh-source root@smartos1 smartos1_freenas1 zones --verbose --ssh-cipher chacha20-poly1305@openssh.com --no-send
````

Later when our freenas1 server is ready we can use the same command without the --no-send at freenas1. At that point the server will receive all the small snapshots up to that point.
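
A sketch of that later run on freenas1 (the target filesystem tank/backups/smartos1 is only a placeholder for whatever dataset you use on freenas1):
```
[root@freenas1 ~]# zfs_autobackup --ssh-source root@smartos1 smartos1_freenas1 tank/backups/smartos1 --verbose --ssh-cipher chacha20-poly1305@openssh.com
```
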
zfs_autobackup
@@ -1,9 +1,5 @@
#!/usr/bin/env python
#!/usr/bin/env python2
# -*- coding: utf8 -*-




from __future__ import print_function
import os
import sys
@@ -11,9 +7,8 @@ import re
import traceback
import subprocess
import pprint
import cStringIO
import time

import shlex

def error(txt):
print(txt, file=sys.stderr)
@@ -40,10 +35,6 @@ def run(cmd, input=None, ssh_to="local", tab_split=False, valid_exitcodes=[ 0 ],
#use ssh?
if ssh_to != "local":
encoded_cmd.extend(["ssh", ssh_to])
if args.ssh_cipher:
encoded_cmd.extend(["-c", args.ssh_cipher])
if args.compress:
encoded_cmd.append("-C")


#make sure the command gets all the data in utf8 format:
@@ -104,22 +95,24 @@ def zfs_get_selected_filesystems(ssh_to, backup_name):
for source_filesystem in source_filesystems:
(name,value,source)=source_filesystem
if value=="false":
verbose("Ignoring: {0} (disabled)".format(name))
verbose("Ignored : {0} (disabled)".format(name))

else:
if source=="local":
selected_filesystems.append(name)
if source=="local" and ( value=="true" or value=="child"):
direct_filesystems.append(name)

if source=="local" and value=="true":
selected_filesystems.append(name)
verbose("Selected: {0} (direct selection)".format(name))
elif source.find("inherited from ")==0:
elif source.find("inherited from ")==0 and (value=="true" or value=="child"):
inherited_from=re.sub("^inherited from ", "", source)
if inherited_from in direct_filesystems:
selected_filesystems.append(name)
verbose("Selected: {0} (inherited selection)".format(name))
else:
verbose("Ignored: {0} (already a backup)".format(name))
verbose("Ignored : {0} (already a backup)".format(name))
else:
vebose("Ignored: {0} ({0})".format(source))
verbose("Ignored : {0} (only childs)".format(name))

return(selected_filesystems)

@@ -150,6 +143,10 @@ def zfs_destroy_snapshots(ssh_to, snapshots):
[ "xargs", "-0", "-n", "1", "zfs", "destroy", "-d" ]
)

def zfs_destroy_bookmark(ssh_to, bookmark):

run(ssh_to=ssh_to, test=args.test, valid_exitcodes=[ 0,1 ], cmd=[ "zfs", "destroy", bookmark ])

"""destroy list of filesystems """
def zfs_destroy(ssh_to, filesystems, recursive=False):

@@ -166,37 +163,51 @@ test_snapshots={}



"""create snapshot on multiple filesystems at once (atomicly)"""
"""create snapshot on multiple filesystems at once (atomicly per pool)"""
def zfs_create_snapshot(ssh_to, filesystems, snapshot):

cmd=[ "zfs", "snapshot" ]

#collect per pool, zfs can only take atomic snapshots per pool
pools={}
for filesystem in filesystems:
cmd.append(filesystem+"@"+snapshot)
pool=filesystem.split('/')[0]
if pool not in pools:
pools[pool]=[]
pools[pool].append(filesystem)

#in testmode we dont actually make changes, so keep them in a list to simulate
if args.test:
if not ssh_to in test_snapshots:
test_snapshots[ssh_to]={}
if not filesystem in test_snapshots[ssh_to]:
test_snapshots[ssh_to][filesystem]=[]
test_snapshots[ssh_to][filesystem].append(snapshot)
for pool in pools:
cmd=[ "zfs", "snapshot" ]
for filesystem in pools[pool]:
cmd.append(filesystem+snapshot)

run(ssh_to=ssh_to, tab_split=False, cmd=cmd, test=args.test)
# #in testmode we dont actually make changes, so keep them in a list to simulate
# if args.test:
# if not ssh_to in test_snapshots:
# test_snapshots[ssh_to]={}
# if not filesystem in test_snapshots[ssh_to]:
# test_snapshots[ssh_to][filesystem]=[]
# test_snapshots[ssh_to][filesystem].append(snapshot)

run(ssh_to=ssh_to, tab_split=False, cmd=cmd, test=args.test)


"""get names of all snapshots for specified filesystems belonging to backup_name

return[filesystem_name]=[ "snashot1", "snapshot2", ... ]
"""
def zfs_get_snapshots(ssh_to, filesystems, backup_name):
def zfs_get_snapshots(ssh_to, filesystems, backup_name, also_bookmarks=False):

ret={}

if filesystems:
if also_bookmarks:
fstype="snapshot,bookmark"
else:
fstype="snapshot"

#TODO: get rid of ugly errors for non-existing target filesystems
cmd=[
"zfs", "list", "-d", "1", "-r", "-t" ,"snapshot", "-H", "-o", "name"
"zfs", "list", "-d", "1", "-r", "-t" ,fstype, "-H", "-o", "name", "-s", "createtxg"
]
cmd.extend(filesystems)

@@ -204,30 +215,92 @@ def zfs_get_snapshots(ssh_to, filesystems, backup_name):


for snapshot in snapshots:
(filesystem, snapshot_name)=snapshot.split("@")
if re.match("^"+backup_name+"-[0-9]*$", snapshot_name):
if not filesystem in ret:
ret[filesystem]=[]
ret[filesystem].append(snapshot_name)
if "@" in snapshot:
(filesystem, snapshot_name)=snapshot.split("@")
snapshot_name="@"+snapshot_name
else:
(filesystem, snapshot_name)=snapshot.split("#")
snapshot_name="#"+snapshot_name

if re.match("^[@#]"+backup_name+"-[0-9]*$", snapshot_name):
ret.setdefault(filesystem,[]).append(snapshot_name)

#TODO: get rid of this or make a more generic caching/testing system. (is it still needed, since the allow_empty-function?)
#also add any test-snapshots that where created with --test mode
if args.test:
if ssh_to in test_snapshots:
for filesystem in filesystems:
if filesystem in test_snapshots[ssh_to]:
if not filesystem in ret:
ret[filesystem]=[]
ret[filesystem].extend(test_snapshots[ssh_to][filesystem])
# if args.test:
# if ssh_to in test_snapshots:
# for filesystem in filesystems:
# if filesystem in test_snapshots[ssh_to]:
# if not filesystem in ret:
# ret[filesystem]=[]
# ret[filesystem].extend(test_snapshots[ssh_to][filesystem])

return(ret)


# """get names of all bookmarks for specified filesystems belonging to backup_name
#
# return[filesystem_name]=[ "bookmark1", "bookmark2", ... ]
# """
# def zfs_get_bookmarks(ssh_to, filesystems, backup_name):
#
# ret={}
#
# if filesystems:
# #TODO: get rid of ugly errors for non-existing target filesystems
# cmd=[
# "zfs", "list", "-d", "1", "-r", "-t" ,"bookmark", "-H", "-o", "name", "-s", "createtxg"
# ]
# cmd.extend(filesystems)
#
# bookmarks=run(ssh_to=ssh_to, tab_split=False, cmd=cmd, valid_exitcodes=[ 0 ])
#
# for bookmark in bookmarks:
# (filesystem, bookmark_name)=bookmark.split("#")
# if re.match("^"+backup_name+"-[0-9]*$", bookmark_name):
# ret.setdefault(filesystem,[]).append(bookmark_name)
#
# return(ret)


def default_tag():
return("zfs_autobackup:"+args.backup_name)

"""hold a snapshot so it cant be destroyed accidently by admin or other processes"""
def zfs_hold_snapshot(ssh_to, snapshot, tag=None):
cmd=[
"zfs", "hold", tag or default_tag(), snapshot
]

run(ssh_to=ssh_to, test=args.test, tab_split=False, cmd=cmd, valid_exitcodes=[ 0, 1 ])


"""release a snapshot"""
def zfs_release_snapshot(ssh_to, snapshot, tag=None):
cmd=[
"zfs", "release", tag or default_tag(), snapshot
]

run(ssh_to=ssh_to, test=args.test, tab_split=False, cmd=cmd, valid_exitcodes=[ 0, 1 ])


"""bookmark a snapshot"""
def zfs_bookmark_snapshot(ssh_to, snapshot):
(filesystem, snapshot_name)=snapshot.split("@")
cmd=[
"zfs", "bookmark", snapshot, '#'+snapshot_name
]

run(ssh_to=ssh_to, test=args.test, tab_split=False, cmd=cmd, valid_exitcodes=[ 0 ])


"""transfer a zfs snapshot from source to target. both can be either local or via ssh.


TODO:

(parially implemented, local buffer is a bit more annoying to do)

buffering: specify buffer_size to use mbuffer (or alike) to apply buffering where neccesary

local to local:
@@ -240,7 +313,6 @@ remote send -> remote buffer -> ssh -> local buffer -> local receive
remote to remote:
remote send -> remote buffer -> ssh -> local buffer -> ssh -> remote buffer -> remote receive

TODO: can we string together all the zfs sends and recvs, so that we only need to use 1 ssh connection? should be faster if there are many small snaphots


@@ -253,15 +325,20 @@ def zfs_transfer(ssh_source, source_filesystem, first_snapshot, second_snapshot,

if ssh_source != "local":
source_cmd.extend([ "ssh", ssh_source ])
if args.ssh_cipher:
source_cmd.extend(["-c", args.ssh_cipher])
if args.compress:
source_cmd.append("-C")

source_cmd.extend(["zfs", "send", ])

#all kind of performance options:
source_cmd.append("-L") # large block support
source_cmd.append("-e") # WRITE_EMBEDDED, more compact stream
source_cmd.append("-c") # use compressed WRITE records
if not args.resume:
source_cmd.append("-D") # dedupped stream, sends less duplicate data



#only verbose in debug mode, lots of output
if args.debug:
if args.debug :
source_cmd.append("-v")

@@ -275,31 +352,41 @@ def zfs_transfer(ssh_source, source_filesystem, first_snapshot, second_snapshot,
verbose("RESUMING "+txt)

else:
source_cmd.append("-p")
# source_cmd.append("-p")

if first_snapshot:
source_cmd.extend([ "-i", first_snapshot ])
source_cmd.append( "-i")
#TODO: fix these horrible escaping hacks
if ssh_source != "local":
source_cmd.append( "'"+first_snapshot+"'" )
else:
source_cmd.append( first_snapshot )

if ssh_source != "local":
source_cmd.append("'" + source_filesystem + "@" + second_snapshot + "'")
source_cmd.append("'" + source_filesystem + second_snapshot + "'")
else:
source_cmd.append(source_filesystem + "@" + second_snapshot)
source_cmd.append(source_filesystem + second_snapshot)

verbose(txt)

if args.buffer and args.ssh_source!="local":
source_cmd.append("|mbuffer -m {}".format(args.buffer))


#### build target command
target_cmd=[]

if ssh_target != "local":
target_cmd.extend([ "ssh", ssh_target ])
if args.ssh_cipher:
target_cmd.extend(["-c", args.ssh_cipher])
if args.compress:
target_cmd.append("-C")

target_cmd.extend(["zfs", "recv", "-u" ])

#also verbose in --verbose mode so we can see the transfer speed when its completed
# filter certain properties on receive (usefull for linux->freebsd in some cases)
if args.filter_properties:
for filter_property in args.filter_properties:
target_cmd.extend([ "-x" , filter_property ])

#also verbose in --verbose moqde so we can see the transfer speed when its completed
if args.verbose or args.debug:
target_cmd.append("-v")

@@ -312,6 +399,8 @@ def zfs_transfer(ssh_source, source_filesystem, first_snapshot, second_snapshot,
else:
target_cmd.append(target_filesystem)

if args.buffer and args.ssh_target!="local":
target_cmd.append("|mbuffer -m {}".format(args.buffer))


#### make sure parent on target exists
@@ -332,15 +421,16 @@ def zfs_transfer(ssh_source, source_filesystem, first_snapshot, second_snapshot,
source_proc.stdout.close() # Allow p1 to receive a SIGPIPE if p2 exits.
target_proc.communicate()

if source_proc.returncode:
raise(subprocess.CalledProcessError(source_proc.returncode, source_cmd))
if not args.ignore_transfer_errors:
if source_proc.returncode:
raise(subprocess.CalledProcessError(source_proc.returncode, source_cmd))

#zfs recv sometimes gives an exitcode 1 while the transfer was succesfull, therefore we ignore exit 1's and do an extra check to see if the snapshot is there.
if target_proc.returncode and target_proc.returncode!=1:
raise(subprocess.CalledProcessError(target_proc.returncode, target_cmd))
#zfs recv sometimes gives an exitcode 1 while the transfer was succesfull, therefore we ignore exit 1's and do an extra check to see if the snapshot is there.
if target_proc.returncode and target_proc.returncode!=1:
raise(subprocess.CalledProcessError(target_proc.returncode, target_cmd))

debug("Verifying if snapshot exists on target")
run(ssh_to=ssh_target, cmd=["zfs", "list", target_filesystem+"@"+second_snapshot ])
run(ssh_to=ssh_target, cmd=["zfs", "list", target_filesystem+second_snapshot ])


@@ -387,8 +477,8 @@ def determine_destroy_list(snapshots, days):
else:
time_secs=int(time_str)
# verbose("time_secs"+time_str)
if (now-time_secs) > (24 * 3600 * days):
ret.append(filesystem+"@"+snapshot)
if (now-time_secs) >= (24 * 3600 * days):
ret.append(filesystem+snapshot)

return(ret)

@@ -397,6 +487,28 @@ def lstrip_path(path, count):
return("/".join(path.split("/")[count:]))


"""get list of filesystems that are changed, compared to the latest snapshot"""
def zfs_get_unchanged_filesystems(ssh_to, filesystems):

ret=[]
for ( filesystem, snapshot_list ) in filesystems.items():
latest_snapshot=snapshot_list[-1]

#make sure its a snapshot and not a bookmark
latest_snapshot="@"+latest_snapshot[:1]

cmd=[
"zfs", "get","-H" ,"-ovalue", "written"+latest_snapshot, filesystem
]

output=run(ssh_to=ssh_to, tab_split=False, cmd=cmd, valid_exitcodes=[ 0 ])

if output[0]=="0B":
ret.append(filesystem)
verbose("No changes on {}".format(filesystem))

return(ret)


def zfs_autobackup():

@@ -428,16 +540,6 @@ def zfs_autobackup():
target_filesystems.append(args.target_fs + "/" + lstrip_path(source_filesystem, args.strip_path))


### creating snapshots
# this is one of the first things we do, so that in case of failures we still have snapshots.

#create new snapshot?
if not args.no_snapshot:
new_snapshot_name=args.backup_name+"-"+time.strftime("%Y%m%d%H%M%S")
verbose("Creating source snapshot {0} on {1} ".format(new_snapshot_name, args.ssh_source))
zfs_create_snapshot(args.ssh_source, source_filesystems, new_snapshot_name)


### get resumable transfers
resumable_target_filesystems={}
if args.resume:
@@ -446,12 +548,44 @@ def zfs_autobackup():
debug("Resumable filesystems: "+str(pprint.pformat(resumable_target_filesystems)))


### get all snapshots of all selected filesystems on both source and target

### get all snapshots of all selected filesystems
verbose("Getting source snapshot-list from {0}".format(args.ssh_source))
source_snapshots=zfs_get_snapshots(args.ssh_source, source_filesystems, args.backup_name)
source_snapshots=zfs_get_snapshots(args.ssh_source, source_filesystems, args.backup_name, also_bookmarks=True)
debug("Source snapshots: " + str(pprint.pformat(source_snapshots)))
# source_bookmarks=zfs_get_bookmarks(args.ssh_source, source_filesystems, args.backup_name)
# debug("Source bookmarks: " + str(pprint.pformat(source_bookmarks)))


#create new snapshot?
if not args.no_snapshot:
#determine which filesystems changed since last snapshot
if not args.allow_empty:
verbose("Determining unchanged filesystems")
unchanged_filesystems=zfs_get_unchanged_filesystems(args.ssh_source, source_snapshots)
else:
unchanged_filesystems=[]

snapshot_filesystems=[]
for source_filesystem in source_filesystems:
if source_filesystem not in unchanged_filesystems:
snapshot_filesystems.append(source_filesystem)


#create snapshot
if snapshot_filesystems:
new_snapshot_name="@"+args.backup_name+"-"+time.strftime("%Y%m%d%H%M%S")
verbose("Creating source snapshot {0} on {1} ".format(new_snapshot_name, args.ssh_source))
zfs_create_snapshot(args.ssh_source, snapshot_filesystems, new_snapshot_name)
else:
verbose("No changes at all, not creating snapshot.")


#add it to the list of source filesystems
for snapshot_filesystem in snapshot_filesystems:
source_snapshots.setdefault(snapshot_filesystem,[]).append(new_snapshot_name)


#### get target snapshots
target_snapshots={}
try:
verbose("Getting target snapshot-list from {0}".format(args.ssh_target))
@@ -483,21 +617,34 @@ def zfs_autobackup():
if target_filesystem in target_snapshots and target_snapshots[target_filesystem]:
#incremental mode, determine what to send and what is obsolete

#latest succesfully send snapshot, should be common on both source and target
#latest succesfully sent snapshot, should be common on both source and target (at least a bookmark on source)
latest_target_snapshot=target_snapshots[target_filesystem][-1]

if latest_target_snapshot not in source_snapshots[source_filesystem]:
#find our starting snapshot/bookmark:
latest_target_bookmark='#'+latest_target_snapshot[1:]
if latest_target_snapshot in source_snapshots[source_filesystem]:
latest_source_index=source_snapshots[source_filesystem].index(latest_target_snapshot)
source_bookmark=latest_target_snapshot
elif latest_target_bookmark in source_snapshots[source_filesystem]:
latest_source_index=source_snapshots[source_filesystem].index(latest_target_bookmark)
source_bookmark=latest_target_bookmark
else:
#cant find latest target anymore. find first common snapshot and inform user
error="Cant find latest target snapshot on source, did you destroy it accidently? "+source_filesystem+"@"+latest_target_snapshot
error_msg="Cant find latest target snapshot or bookmark on source, did you destroy/rename it?"
error_msg=error_msg+"\nLatest on target : "+target_filesystem+latest_target_snapshot
error_msg=error_msg+"\nMissing on source: "+source_filesystem+latest_target_bookmark
found=False
for latest_target_snapshot in reversed(target_snapshots[target_filesystem]):
if latest_target_snapshot in source_snapshots[source_filesystem]:
error=error+"\nYou could solve this by rolling back to: "+target_filesystem+"@"+latest_target_snapshot;
error_msg=error_msg+"\nYou could solve this by rolling back to this common snapshot on target: "+target_filesystem+latest_target_snapshot
found=True
break
if not found:
error_msg=error_msg+"\nAlso could not find an earlier common snapshot to rollback to."

raise(Exception(error))
raise(Exception(error_msg))

#send all new source snapshots that come AFTER the last target snapshot
latest_source_index=source_snapshots[source_filesystem].index(latest_target_snapshot)
send_snapshots=source_snapshots[source_filesystem][latest_source_index+1:]

#source snapshots that come BEFORE last target snapshot are obsolete
@@ -508,6 +655,7 @@ def zfs_autobackup():
target_obsolete_snapshots[target_filesystem]=target_snapshots[target_filesystem][0:latest_target_index]
else:
#initial mode, send all snapshots, nothing is obsolete:
source_bookmark=None
latest_target_snapshot=None
send_snapshots=source_snapshots[source_filesystem]
target_obsolete_snapshots[target_filesystem]=[]
@@ -519,7 +667,7 @@ def zfs_autobackup():
if send_snapshots and args.rollback and latest_target_snapshot:
#roll back any changes on target
debug("Rolling back target to latest snapshot.")
run(ssh_to=args.ssh_target, test=args.test, cmd=["zfs", "rollback", target_filesystem+"@"+latest_target_snapshot ])
run(ssh_to=args.ssh_target, test=args.test, cmd=["zfs", "rollback", target_filesystem+latest_target_snapshot ])


for send_snapshot in send_snapshots:
@@ -530,19 +678,36 @@ def zfs_autobackup():
else:
resume_token=None

#hold the snapshot we're sending on the source
zfs_hold_snapshot(ssh_to=args.ssh_source, snapshot=source_filesystem+send_snapshot)

zfs_transfer(
ssh_source=args.ssh_source, source_filesystem=source_filesystem,
first_snapshot=latest_target_snapshot, second_snapshot=send_snapshot,
first_snapshot=source_bookmark, second_snapshot=send_snapshot,
ssh_target=args.ssh_target, target_filesystem=target_filesystem,
resume_token=resume_token
)

#hold the snapshot we just send on the target
zfs_hold_snapshot(ssh_to=args.ssh_target, snapshot=target_filesystem+send_snapshot)

#bookmark the snapshot we just send on the source, so we can also release and mark it obsolete.
zfs_bookmark_snapshot(ssh_to=args.ssh_source, snapshot=source_filesystem+send_snapshot)
zfs_release_snapshot(ssh_to=args.ssh_source, snapshot=source_filesystem+send_snapshot)
source_obsolete_snapshots[source_filesystem].append(send_snapshot)


#now that we succesfully transferred this snapshot, the previous snapshot is obsolete:
#now that we succesfully transferred this snapshot, cleanup the previous stuff
if latest_target_snapshot:
#dont need the latest_target_snapshot anymore
zfs_release_snapshot(ssh_to=args.ssh_target, snapshot=target_filesystem+latest_target_snapshot)
target_obsolete_snapshots[target_filesystem].append(latest_target_snapshot)
source_obsolete_snapshots[source_filesystem].append(latest_target_snapshot)

#delete previous bookmark
zfs_destroy_bookmark(ssh_to=args.ssh_source, bookmark=source_filesystem+source_bookmark)

# zfs_release_snapshot(ssh_to=args.ssh_source, snapshot=source_filesystem+"@"+latest_target_snapshot)
# source_obsolete_snapshots[source_filesystem].append(latest_target_snapshot)
#we just received a new filesytem?
else:
if args.clear_refreservation:
@@ -558,7 +723,7 @@ def zfs_autobackup():


latest_target_snapshot=send_snapshot

source_bookmark='#'+latest_target_snapshot[1:]


############## cleanup section
@@ -609,10 +774,9 @@ def zfs_autobackup():

# parse arguments
import argparse
parser = argparse.ArgumentParser(description='ZFS autobackup v2.1')
parser = argparse.ArgumentParser(description='ZFS autobackup v2.2')
parser.add_argument('--ssh-source', default="local", help='Source host to get backup from. (user@hostname) Default %(default)s.')
parser.add_argument('--ssh-target', default="local", help='Target host to push backup to. (user@hostname) Default %(default)s.')
parser.add_argument('--ssh-cipher', default=None, help='SSH cipher to use (default %(default)s)')
parser.add_argument('--keep-source', type=int, default=30, help='Number of days to keep old snapshots on source. Default %(default)s.')
parser.add_argument('--keep-target', type=int, default=30, help='Number of days to keep old snapshots on target. Default %(default)s.')
parser.add_argument('backup_name', help='Name of the backup (you should set the zfs property "autobackup:backup-name" to true on filesystems you want to backup')
@@ -620,18 +784,20 @@ parser.add_argument('target_fs', help='Target filesystem')

parser.add_argument('--no-snapshot', action='store_true', help='dont create new snapshot (usefull for finishing uncompleted backups, or cleanups)')
parser.add_argument('--no-send', action='store_true', help='dont send snapshots (usefull to only do a cleanup)')
parser.add_argument('--resume', action='store_true', help='support resuming of interrupted transfers by using the zfs extensible_dataset feature (both zpools should have it enabled)')

parser.add_argument('--allow-empty', action='store_true', help='if nothing has changed, still create empty snapshots.')
parser.add_argument('--resume', action='store_true', help='support resuming of interrupted transfers by using the zfs extensible_dataset feature (both zpools should have it enabled) Disadvantage is that you need to use zfs recv -A if another snapshot is created on the target during a receive. Otherwise it will keep failing.')
parser.add_argument('--strip-path', default=0, type=int, help='number of directory to strip from path (use 1 when cloning zones between 2 SmartOS machines)')
parser.add_argument('--buffer', default="", help='Use mbuffer with specified size to speedup zfs transfer. (e.g. --buffer 1G)')


parser.add_argument('--destroy-stale', action='store_true', help='Destroy stale backups that have no more snapshots. Be sure to verify the output before using this! ')
parser.add_argument('--clear-refreservation', action='store_true', help='Set refreservation property to none for new filesystems. Usefull when backupping SmartOS volumes. (recommended)')
parser.add_argument('--clear-mountpoint', action='store_true', help='Sets canmount=noauto property, to prevent the received filesystem from mounting over existing filesystems. (recommended)')
parser.add_argument('--filter-properties', action='append', help='Filter properties when receiving filesystems. Can be specified multiple times. (Example: If you send data from Linux to FreeNAS, you should filter xattr)')
parser.add_argument('--rollback', action='store_true', help='Rollback changes on the target before starting a backup. (normally you can prevent changes by setting the readonly property on the target_fs to on)')
parser.add_argument('--ignore-transfer-errors', action='store_true', help='Ignore transfer errors (still checks if received filesystem exists. usefull for acltype errors)')


parser.add_argument('--compress', action='store_true', help='use compression during zfs send/recv')
parser.add_argument('--test', action='store_true', help='dont change anything, just show what would be done (still does all read-only operations)')
parser.add_argument('--verbose', action='store_true', help='verbose output')
parser.add_argument('--debug', action='store_true', help='debug output (shows commands that are executed)')
@@ -639,5 +805,11 @@ parser.add_argument('--debug', action='store_true', help='debug output (shows co
#note args is the only global variable we use, since its a global readonly setting anyway
args = parser.parse_args()


zfs_autobackup()
try:
zfs_autobackup()
except Exception as e:
if args.debug:
raise
else:
print("* ABORTED *")
print(str(e))