Moving Servers

The problem

I took up a challenge of sorts: reinstalling a live server with as little downtime as possible.

I tested Alpine Linux as a Xen host OS; it worked out nicely for me, so I've declared it a "standard". I then ran a lot of tests and reinstalled my test hosts with Alpine.

Once you have a standard, you of course notice there's this one odd server that actually does have running VMs for customers or yourself, like the one for Confluence. And of course that's the one still running Debian.

So I need to come up with a way to get Alpine on this box without losing all my data.

Basically, I don't need any backup, since I have two partitions on my hosts' disks: one holds a vg00 with the OS only, and one holds a vg01 with all the virtual machines and other data. This is not as beautiful as a separate RAID LUN for the OS, but as long as you don't totally mess up your partition tables, it's a smooth ride.

  • Reinstall the OS manually into a chroot inside the old vg00 partition (see the sketch below)
  • Get it to really boot on its own (the hard part)
  • Re-import the existing data VG
  • Done.
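A very rough sketch of the chroot part; the LV name, mount point and package names are assumptions (the packages being roughly what Alpine shipped around that time):

    # mount the freshly made root LV and bind the kernel filesystems into it
    mount /dev/vg00/root /mnt
    mount --bind /dev  /mnt/dev
    mount --bind /proc /mnt/proc
    mount --bind /sys  /mnt/sys
    chroot /mnt /bin/sh

    # inside the chroot: a kernel plus bootloader is what makes it really bootable
    apk add linux-grsec syslinux
    extlinux --install /boot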

But maybe it would still be a bit nicer to have a fully working and current backup?

So I rented a server from the same ISP on a one-month term, for 44 Euros, and will use that as my backup destination.

Also, my $oldjob will soon need to run a similar scenario: they have to move from their current server to one at a different ISP. They never set up LVM, so many of the efficient options for this are not available to them.


Most of my thoughts point to using the temporary system as an iSCSI target and then committing whatever abuse is needed for the actual mirroring. Even dd is a nice choice if you have a block target.
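For illustration, exporting a device from server B with tgt and attaching it on server A could look roughly like this; the IQN, addresses and device names are assumptions:

    # on server B: export a block device as LUN 1 of a new iSCSI target
    tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.2011-04.example:backup
    tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/md1
    tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL

    # on server A: discover and log in, the LUN then shows up as a local disk
    iscsiadm -m discovery -t sendtargets -p serverB
    iscsiadm -m node -T iqn.2011-04.example:backup -p serverB --login

    # crude but effective: a straight block copy onto the remote disk
    dd if=/dev/md1 of=/dev/sdc bs=1M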


The setup:

Server A:

  • Debian Squeeze, Xen 4.0
  • MD RAID made up of two 750 GB SATA disks
  • Partitions md0, md1
  • Volume groups vg00 (on md0) and vg01 (on md1)
  • 100 Mbit uplink for all traffic, no cross-connect to server B

Server B:

  • Will get Alpine Linux installed
  • MD RAID made up of two 750 GB SATA disks
  • OS needs about 1 GB of space; the layout can be anything I choose, but should probably match server A's final setup
  • 100 Mbit uplink for all traffic, no cross-connect to server A

Let's look at the

Different options

DRBD

  • Attaching / detaching
  • Efficient first-time sync
  • Efficient bandwidth throttling
  • Well structured
  • Needs metadata for its journal: either grow the source LVs or prepare an extra LV to hold the metadata
  • Better to mirror on the PV level imho, unless you don't care whether the copy is 100% identical to the source
  • VMs could run off the DRBD device even while we're syncing back (see the sketch below)
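A minimal resource definition for the DRBD 8.3 shipped with Squeeze might look like this; hostnames, devices and addresses are assumptions, and the external metadata LV matches the point above:

    resource backup {
      protocol C;
      syncer { rate 10M; }    # initial sync throttle, 10 MByte/s = ~80 Mbit/s

      on serverA {            # section names must match uname -n
        device    /dev/drbd0;
        disk      /dev/vg01/data;
        address   192.0.2.1:7788;
        meta-disk /dev/vg00/drbdmeta[0];   # external metadata LV
      }
      on serverB {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.0.2.2:7788;
        meta-disk internal;
      }
    }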

Rsync

  • use an LVM snapshot
  • the LVM snapshot incurs a performance hit on the live system
  • use --inplace on all runs: instead of building a temporary copy of an outdated file, it rewrites the destination file in place
  • use compression (but maybe not the default one, if you want things to be lightweight)
  • use bandwidth limits
  • restore is mindlessly easy, but in general rsync is very, very slow (see the sketch below)
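Put together, one backup cycle could look roughly like this; LV names, the snapshot size and paths are assumptions:

    # snapshot the LV so we copy a consistent state (reserve space for changes)
    lvcreate -s -L 5G -n vm1-snap /dev/vg01/vm1
    mkdir -p /mnt/vm1-snap
    mount -o ro /dev/vg01/vm1-snap /mnt/vm1-snap

    # --inplace rewrites changed files instead of building temp copies,
    # -z compresses on the wire, --bwlimit is in KBytes/s
    rsync -a --inplace -z --bwlimit=8000 /mnt/vm1-snap/ root@serverB:/backup/vm1/

    umount /mnt/vm1-snap
    lvremove -f /dev/vg01/vm1-snap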

MD RAID

  • attach remote storage to the existing RAID
  • rebuild from the remote storage on the new server
  • nice: a binary copy, and no messing with headers except the (already existing) RAID headers, which are also not getting "faked" or anything
  • bandwidth throttling and bandwidth guarantees (via the md sync speed limits)
  • write-mostly mode would avoid reading from the remote replica
  • the change bitmap is already enabled on my server (mdadm -b internal)
  • VMs could run off the MD device even while we're syncing back (see the sketch below)
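With the iSCSI LUN from above attached on server A (assumed to appear as /dev/sdc), growing the mirror onto it might look like this:

    # add the remote disk as a third mirror member and let md rebuild onto it
    mdadm /dev/md1 --add --write-mostly /dev/sdc
    mdadm --grow /dev/md1 --raid-devices=3

    # keep the rebuild below the 100 Mbit uplink (values are KBytes/s)
    echo 2000  > /proc/sys/dev/raid/speed_limit_min
    echo 10000 > /proc/sys/dev/raid/speed_limit_max

    # once in sync, detach the remote member again
    mdadm /dev/md1 --fail /dev/sdc --remove /dev/sdc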

LVM Mirror:

  • Creating a bunch of LVM snapshots and splitting them into a new VG on an iSCSI-backed PV is a perfectly valid way of doing this.
  • Sweet, since it's all in the LVM layer.
  • Sweet if a recovery point in the past is acceptable (because it copies the state from when the snapshot was taken).
  • Probably the best way to do this if it needs to run cyclically.
  • VMs could run off the mirror, but mirroring back would mean relying on the LVM mirror policies, which are not good enough. Even if you use disk tags, the mirror log can get disabled and all kinds of crap. So it's good for creating the backup, but not for merging the backup back in (see the sketch below).
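A sketch of the mirror-and-split variant, which is essentially what the final section below boils down to; VG, LV and device names are assumptions, and --splitmirrors needs a reasonably recent LVM2:

    # put the iSCSI disk under LVM control and add it to the data VG
    pvcreate /dev/sdc
    vgextend vg01 /dev/sdc

    # mirror the LV onto the remote PV; a core log avoids a separate log LV
    lvconvert -m1 --mirrorlog core vg01/vm1 /dev/sdc

    # once in sync, split the remote copy off as its own LV
    lvconvert --splitmirrors 1 --name vm1-backup vg01/vm1 /dev/sdc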


DM Replication

  • DM has a replication target in newer kernels
  • Nobody documented it, probably not the best idea
  • Nobody uses it, probably not the best idea to be first
  • Probably not available on Debian Squeeze


Restore from Backup

  • this can be the most bandwidth-efficient option
  • extra charming, since the backup is "out of band" from the source's point of view
  • my backups are all client-side compressed, meaning they'll uncompress at nice, high rates, but not above 90 Mbit/s
  • need to recreate a lot (VGs, LVs, VM configs)
  • the restored system is not really the one it used to be (FS UUIDs, labels, ...)
  • great for VMs that have just one 30 GB sda1 partition or stuff like that
  • not feasible for non-idiot VMs, i.e. ones with valid partitioning or LVM inside
  • a nice opportunity to test the backups (see the sketch below)
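For the simple image-style case, a restore could be as plain as this; the names, size and backup host are hypothetical:

    # recreate the LV at its old size, then stream the image back into it
    lvcreate -L 30G -n vm1 vg01
    ssh backup@backuphost 'cat /backups/vm1.img.gz' | gunzip > /dev/vg01/vm1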


What I did in the end:

  • Used LVM to mirror the data against iSCSI
  • Exported / split the VG
  • Reinstalled Alpine manually, only into the first 16 GB of the old disks; the data VG started further into the disk, so it could stay unharmed
  • Re-glued everything (see the sketch below)
  • Done. (So the mirror was just a backup.)
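The split and re-glue part, roughly; all names are assumptions again, and the LVs being moved have to be inactive for vgsplit:

    # move the iSCSI PV, holding the split-off copies, into its own VG
    # so the reinstall can't touch it
    vgsplit vg01 vgbackup /dev/sdc
    vgchange -an vgbackup

    # ... reinstall Alpine into the first 16 GB, leaving the data partition alone ...

    # on the fresh system: pick the data VG back up
    vgscan
    vgchange -ay vg01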

The lesson I adopted from this: the first 64 GB of every software RAID disk, or the first 64 GB LUN on a RAID HBA, are reserved for my OS.