XS backup restore
Goals
- Simple, efficient (minimise processing, traffic), quick dev turnaround, debuggable
- Sane, fail-safe, atomic-ish
- Independent of the actual storage strategy (DS-agnostic)
- And yet, it must work well with the current DS (as of April 2008), and avoid restricting the evolution of the DS
- Safe for XO and XS
- The server can refuse to back up due to traffic/load
- Simple version negotiation
- Supports full homedir restore
- Supports per-document restore (via the Journal and/or a web-based interface)
- There is some interest in leveraging a web-based 'document restore' facility as an 'async document share/publish' mechanism.
Notes
- All timestamps are integers representing seconds elapsed since the UNIX epoch.
- There is a REST meta-protocol versioning scheme. Outside of that initial check, what this page describes is version 1 of the backup/restore protocol.
XO-initiated backup
XO side
1. Issue an HTTP GET to the XS with path /backup/<protocol version>/available/<this_XO_serial_number>, where <protocol version> is the integer representing the latest backup protocol version supported by this XO. In protocol version 1, a successful reply is a 200 OK with an empty body.
- If the sent protocol version is not supported by the school server, it will return a 404 Not Found error whose only body content is a comma-separated list of integers representing the backup protocol versions supported by this school server.
- If this school server refuses to provide backup service for this XO, it will return a 403 Forbidden error.
- If the school server is too busy to deal with the XO's backup request, it will return a 503 Service Unavailable error. The XO will sleep 5 minutes and retry.
2. If the request in step 1 succeeded, go to step 3. Otherwise, if none of the backup system versions on the XO (multiple may be present) are in the 'versions' list given in the 404 error, abort until the next scheduled backup time (we cannot back up to this XS). If a version was returned that also exists locally, go back to step 1 and use that protocol version.
3. Write out all the metadata for all the documents available for backup, in CanonicalJSON format. Save it as metadata.json, overwriting (atomically) any previously existing version.
4. Run rsync-over-ssh between the datastore and a remote directory called datastore-current/ in the user's home directory on the XS. The remote datastore-current directory will have a complete set of files, so use the rsync facilities available to optimise the transfer and delete stale files: --times, --partial (to make retries faster) and --delete. Check the exit value from rsync. If it is non-zero, retry up to 3 times. If still non-zero, abort until the next backup.
5. Store the epoch of the end time of step 4.
Note: This backup scheme is not atomic. Users of the backed-up data must be prepared for slightly inconsistent state between metadata and files - a large window exists between steps 3 and 5. Solutions to this could come from the FS (a ZFS-like implementation) or from a higher-level layer (a git-based DS for example).
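In rough Python, the XO-side sequence above might look like the sketch below. Only the URL scheme, status codes and rsync options come from this page; the school server hostname, serial number, datastore path and per-laptop ssh account are placeholder assumptions.

 # Sketch of the XO-side backup flow (steps 1-5 above).  Hostname, serial
 # number and datastore path are placeholders, not part of the protocol.
 import json, os, subprocess, time
 import urllib.request, urllib.error
 
 PROTO = 1                                     # latest version this XO speaks
 SN = "SHF70000000"                            # placeholder serial number
 XS = "schoolserver"                            # placeholder XS hostname
 DATASTORE = os.path.expanduser("~/.sugar/default/datastore")   # assumed path
 
 def check_available():
     """Step 1: ask the XS whether it will take a backup from us now."""
     url = "http://%s/backup/%d/available/%s" % (XS, PROTO, SN)
     while True:
         try:
             urllib.request.urlopen(url)
             return                             # 200 OK, empty body: go ahead
         except urllib.error.HTTPError as e:
             if e.code == 503:                  # too busy: sleep 5 min, retry
                 time.sleep(300)
                 continue
             if e.code == 404:                  # protocol version not supported
                 supported = [int(v) for v in e.read().decode().split(",")]
                 raise RuntimeError("XS only speaks versions %s" % supported)
             if e.code == 403:                  # unregistered: no service
                 raise RuntimeError("XS refuses backup service for this XO")
             raise
 
 def write_metadata(metadata):
     """Step 3: dump document metadata to metadata.json, atomically."""
     tmp = os.path.join(DATASTORE, "metadata.json.tmp")
     with open(tmp, "w") as f:
         json.dump(metadata, f, sort_keys=True)
     os.rename(tmp, os.path.join(DATASTORE, "metadata.json"))
 
 def rsync_backup():
     """Steps 4-5: push the datastore to datastore-current/ on the XS."""
     cmd = ["rsync", "-r", "--times", "--partial", "--delete",
            DATASTORE + "/", "%s@%s:datastore-current/" % (SN, XS)]
     for attempt in range(3):
         if subprocess.call(cmd) == 0:
             return int(time.time())            # epoch of the end of step 4
     raise RuntimeError("rsync failed 3 times; abort until next backup")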
XS side
On the school server, when getting a request for /backup/<protocol version>/available/<SN>:
1. Check if we support the protocol version. If not, return 404 and a list of supported versions. Otherwise, proceed.
2. Check if we know this machine (can find it in our registration DB on the XS). If not, return 403; we will not offer it backup service. Check if we're too busy to process another concurrent backup (e.g. based on transfer rate or number of rsync processes); if so, return 503.
3. Check if backups for this machine exist. In protocol version 1, if backups don't exist, let timestamp be 0. Otherwise, find the timestamp of the last backed-up object for this machine and return it.
4. Check system and network load metrics - can we offer service to this client?
5. Return a 200 OK response.
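The decision logic above could be sketched as a plain function returning a status and body. is_registered(), current_rsync_count() and last_backup_timestamp() stand in for whatever registration DB and load checks the XS actually uses, and the concurrency threshold is an arbitrary assumption.

 # Sketch of the XS-side checks for /backup/<version>/available/<SN>.
 # The helper functions below are placeholders for real XS services.
 import os
 
 SUPPORTED_VERSIONS = [1]
 MAX_CONCURRENT_RSYNCS = 8                     # assumed load threshold
 
 def is_registered(sn):
     # Placeholder: the real XS consults its registration DB.
     return os.path.isdir(os.path.join("/home", sn))
 
 def current_rsync_count():
     # Placeholder load check; a real one might count rsync processes.
     return 0
 
 def last_backup_timestamp(sn):
     # Placeholder: mtime of datastore-current, or 0 if no backups exist.
     current = os.path.join("/home", sn, "datastore-current")
     return int(os.path.getmtime(current)) if os.path.isdir(current) else 0
 
 def handle_available(version, sn):
     """Return an (http_status, body) pair for the availability request."""
     if version not in SUPPORTED_VERSIONS:                       # step 1
         return 404, ",".join(str(v) for v in SUPPORTED_VERSIONS)
     if not is_registered(sn):                                   # step 2
         return 403, ""
     if current_rsync_count() >= MAX_CONCURRENT_RSYNCS:          # steps 2/4
         return 503, ""
     timestamp = last_backup_timestamp(sn)                       # step 3
     return 200, str(timestamp)        # step 5; v1 clients only need the 200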
When the rsync-over-ssh connection comes in, we need to have an rsync wrapper script that will
1. Establish a lock using flock to prevent overlaps.
2. Clean up/sanitise the parameter list passed to rsync.
3. Upon successful completion, set a success flag.
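One possible shape for that wrapper, sketched in Python (a small shell script would do equally well). The lock-file and success-flag names are assumptions, and a production wrapper would need a much stricter rsync option whitelist (in the spirit of rrsync).

 #!/usr/bin/env python3
 # Sketch of an rsync wrapper run as the forced command for incoming backup
 # connections.  Lock-file and success-flag names are assumptions.
 import fcntl, os, subprocess, sys
 
 LOCKFILE = os.path.expanduser("~/.backup.lock")
 SUCCESS_FLAG = os.path.expanduser("~/backup-succeeded")
 
 def main():
     # 1. Take an exclusive lock so backups for this user cannot overlap.
     lock = open(LOCKFILE, "w")
     try:
         fcntl.flock(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
     except OSError:
         sys.exit("another backup is already running")
     # 2. Sanitise the parameter list.  The client's command arrives in
     #    SSH_ORIGINAL_COMMAND; insist on rsync and refuse absolute or
     #    parent-relative paths so writes stay inside the home directory.
     args = os.environ.get("SSH_ORIGINAL_COMMAND", "").split()
     if not args or args[0] != "rsync":
         sys.exit("only rsync is allowed on this account")
     if any(a.startswith("/") or ".." in a for a in args[1:]):
         sys.exit("path escapes the backup area")
     # 3. Run rsync; on success, drop the flag the maintenance cronjob checks.
     status = subprocess.call(args)
     if status == 0:
         open(SUCCESS_FLAG, "w").close()
     sys.exit(status)
 
 if __name__ == "__main__":
     main()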
XS maintenance
- A regular cronjob checks for recent success flags. Home directories that are marked as successfully backed up will be 'shadowed' with a hardcopy script similar to pdumpfs.
- It might be a good idea to spot partial/failed backups and checkpoint/shadow them anyway. If our handling of inconsistent data is reasonably good, a partial backup might be a passable data source for per-document restores.
- A low-freq cronjob runs hardlink.py (http://code.google.com/p/hardlinkpy/), which hard-links identical files to reclaim space.
- A cronjob removes old pdumpfs snapshots, ideally with some auto-tuning for space usage.
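A sketch of the shadowing job, assuming the success-flag name used in the wrapper sketch above and the datastore-YYYY-MM-DD snapshot naming described in the restore section below; the real job might drive pdumpfs itself rather than cp -al.

 # Sketch of the shadowing cronjob: for every home directory whose backup
 # succeeded, make a dated hard-linked snapshot of datastore-current/ and
 # clear the success flag.  Paths and flag name are assumptions.
 import os, subprocess, time
 
 HOMES = "/home"
 
 def shadow_successful_backups():
     today = time.strftime("datastore-%Y-%m-%d")
     for user in os.listdir(HOMES):
         home = os.path.join(HOMES, user)
         flag = os.path.join(home, "backup-succeeded")
         src = os.path.join(home, "datastore-current")
         dst = os.path.join(home, today)
         if os.path.exists(flag) and os.path.isdir(src) and not os.path.exists(dst):
             # cp -al copies the tree but hard-links file contents, so
             # unchanged files cost no extra space (pdumpfs-style).
             if subprocess.call(["cp", "-al", src, dst]) == 0:
                 os.remove(flag)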
XO-initiated full restore
XO side
1. Issue an HTTP GET to the XS with path /backup/<protocol version>/restore/<this_XO_serial_number>. The response is 0 or a single absolute path on the XS, pointing to the location of this XO's backup files in the backup hierarchy.
- If the response is 0, abort and report to the user; there are no backups to restore. Otherwise, store the path for future use.
- If the request returns a 500, abort and report to the user that they must pick out restore files individually from the web interface.
- If the request returns a 503, wait 1 minute, then retry step 1. Otherwise, proceed.
2. rsync the directory provided in step 1, restoring mode and times. Retry 3 times; if still failing, abort the restore and report to the user. (Do we need to remove the fetched files in case of a dropped rsync? rsync guarantees we won't get partial files in place, so it is reasonably safe, and makes retries "incremental". As long as the metadata is restored only once step 2 succeeds, the Journal should be ok...)
3. Rebuild the metadata in Xapian, based on the metadata.json file that should have been restored by rsync in step 2. It might make sense to apply some checks - for example, do the files named by the metadata file exist? The race conditions that exist during the backup generation mean that a document may have changed or vanished after the metadata was created.
4. We have succeeded with the restore. Inform the user. Eat some ice cream. Dance salsa.
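The check in step 3 could look roughly like the sketch below. The layout of metadata.json (a list of per-document dicts with a 'filename' key relative to the datastore) is an assumption; this page does not pin the format down.

 # Sketch of the step-3 consistency check: verify that every document named
 # in the restored metadata.json still has its file on disk.
 import json, os
 
 DATASTORE = os.path.expanduser("~/.sugar/default/datastore")   # assumed path
 
 def check_restored_metadata(datastore=DATASTORE):
     with open(os.path.join(datastore, "metadata.json")) as f:
         entries = json.load(f)
     ok, missing = [], []
     for entry in entries:
         path = os.path.join(datastore, entry["filename"])
         (ok if os.path.exists(path) else missing).append(entry)
     # Because of the race window between steps 3 and 5 of the backup, some
     # entries may name files that changed or vanished; index only the ones
     # whose files are actually present.
     return ok, missing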
XS side
On the school server, when getting a request for /backup/<protocol version>/restore/<SN>:
1. Check if we support the protocol version. If not, return 404 and a list of supported versions. Otherwise, proceed.
2. Check if backups for this machine exist. If not, return a 200 OK whose only body content is 0. Otherwise, proceed.
3. Check system and network traffic load metrics. Return 503 for "not now".
4. Find the latest complete backup - it should be the most recent directory following the format ~/datastore-YYYY-MM-DD in the home directory for the laptop. Note: 'most recent' should be interpreted using the datestamp parsed from the directory name, not the FS ctime/mtime. Return the directory path in a '200' response.
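Selecting the snapshot by the datestamp in its name (step 4) takes only a few lines; the home directory layout is as described above.

 # Sketch of step 4: pick the newest datastore-YYYY-MM-DD snapshot for a
 # laptop by parsing the datestamp in the name, not the FS ctime/mtime.
 import datetime, os, re
 
 SNAPSHOT_RE = re.compile(r"^datastore-(\d{4})-(\d{2})-(\d{2})$")
 
 def latest_backup_dir(home):
     best = None
     for name in os.listdir(home):
         m = SNAPSHOT_RE.match(name)
         if m and os.path.isdir(os.path.join(home, name)):
             stamp = datetime.date(*map(int, m.groups()))
             if best is None or stamp > best[0]:
                 best = (stamp, os.path.join(home, name))
     return best[1] if best else None   # None -> reply "0": nothing to restore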
Listing of stored backups
The XS will also answer requests to
/backup/<protocol version>/list/<SN>
with a 200 OK response whose body is a newline-separated list of paths to the available snapshots. The XO client can then initiate a restore of any of those snapshots over SSH.
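A list handler could simply enumerate the same snapshot directories, for example (assuming the directory layout above):

 # Sketch of the /backup/1/list/<SN> reply body: newline-separated paths to
 # the snapshots available for this serial number.
 import os, re
 
 def list_snapshots(home):
     names = sorted(n for n in os.listdir(home)
                    if re.match(r"^datastore-\d{4}-\d{2}-\d{2}$", n)
                    and os.path.isdir(os.path.join(home, n)))
     return "\n".join(os.path.join(home, n) for n in names)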