.. Ross Patterson's Blog imported post,
   created by `$ ./bin/rfc822-to-post` on Mar 15, 2021.

.. meta::
   :description: Using COW to get a usable local setup when developing
       upgrade procedures for messy sites.
   :keywords: Plone, Zope

.. post:: Feb 29, 2012
   :tags: Plone, Zope
   :author: Ross Patterson
   :redirect: @@redirect-to-uuid/79c650f14d3c43058f471208471d8f08

####################################
Local Development for Large Upgrades
####################################

Using COW to get a usable local setup when developing upgrade procedures for
messy sites.

I've had to do way too many Plone upgrades where the sites are just a
horrible nest of bad administration and maintenance, worse customizations,
and the worst add-ons. A site in such a state is going to take a lot of
small adjustments and additions to the migration procedure to get it back
into a healthy enough state for the Plone upgrades to run. The kinds of
problems you run into on such a site are often very specific to corners of
functionality and to deep, dark corners of the content hierarchy. I've found
no better way to work on this than to implement upgrade steps, run the
upgrade, fix any errors, QA the completed upgrade, write new upgrade steps
and refine the existing ones, and repeat ad nauseam.

Since running the upgrade procedure can take a long time, the iterations
take a long time. This is compounded by the amount of time it can take to
transfer the DB, no matter how you do it. Using ZEO over a remote connection
is very slow, I assume due to the latency, from looking at iftop output.
Syncing the BLOBs, even with a well-tuned ``rsync`` script, can take
forever.

I finally found a set of approaches that brings my development of upgrade
steps up to a tolerable speed. It's still slower than "normal" development,
but it no longer feels maddening.

Firstly, I developed an upgrade runner that commits upgrades per profile
version increment so that I don't have to start the whole procedure over.
It's called ``collective.upgrade`` and I plan to cut a release along with a
more detailed blog post once I've deployed this current upgrade.

Secondly, I used a union mount (UnionFS, AUFS, etc.) to get copy-on-write
(COW) behavior for my ``var/filestorage/Data.fs`` and ``var/blobstorage/``.
IOW, whenever the upgrade procedure reads a BLOB, it gets it from the
production blobstorage directory via a network filesystem (SSHFS, CIFS/SMB,
NFS, etc.). When writing to a BLOB, however, it writes it to a local
directory and will use that version of the BLOB in the future.

Since BLOBs are very often compressed images and files, I find no penalty in
just letting the network FS do a dumb transfer of BLOBs as opposed to
compressing with ``rsync -z``. I also use the same setup for ``Data.fs``,
but since it's much smaller than the BLOBs and much more heavily used, I've
found it best to just ``rsync`` the ``Data.fs`` and ``Data.fs.index``.

With this setup I can test the upgrade at nearly local speeds, and my
upgrade step development is much faster for having all my favorite tools.
Here are the scripts I'm using to do this.
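Before the scripts themselves, here's a quick sketch of what the
copy-on-write split looks like in practice once the union mounts below are
in place. The paths are the ones used in the mount commands that follow,
and the test file name is just for illustration::

    # Reads fall through the union to the read-only prod branch;
    # nothing is copied locally until something writes.
    ls var/blobstorage/

    # A write lands in the local writable branch and never touches prod.
    touch var/blobstorage/cow-sanity-check
    ls var/blobstorage.prod/cow-sanity-check      # the new file is local
    ls var/prod/var/blobstorage/cow-sanity-check  # fails: prod untouched
    rm var/blobstorage/cow-sanity-check           # clean up the test file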
It's very important that you mount the prod network FS as read-only with the
``-o ro`` mount option. The filesystem type and source in the first command
below are placeholders for whichever network FS you use (SSHFS, CIFS/SMB,
NFS, etc.); the local ``var/*.prod`` directories are the writable branches
of the unions::

    # Create the prod mount point and the local writable branches.
    mkdir -p var/prod var/filestorage.prod var/blobstorage.prod

    # Mount prod read-only; <fstype> and <prod-source> are placeholders.
    sudo mount -v -t <fstype> <prod-source> var/prod -o ro

    # Assemble the unions: the first (local) branch is writable by
    # default and "=rr" marks the prod branch as really-read-only.
    sudo mount -v -t aufs none var/filestorage -o dirs=var/filestorage.prod:var/prod/var/filestorage=rr
    sudo mount -v -t aufs none var/blobstorage -o dirs=var/blobstorage.prod:var/prod/var/blobstorage=rr

When it's time to refresh to the latest prod, shut down Zope and ZEO, then
pull the latest ``Data.fs`` into the local branch and revert any locally
modified BLOBs::

    rsync -Paz var/prod/var/filestorage/Data.fs var/prod/var/filestorage/Data.fs.index var/filestorage.prod/
    rsync -Paz --existing var/prod/var/blobstorage/ var/blobstorage.prod/

Because of ``--existing``, the second command there will only revert BLOBs
that were already copied locally back to the prod version.

Enjoy, but be *very* careful that you don't accidentally apply changes to
prod. Back up prod and make *sure* your prod network mount is *read-only*.

.. update:: Feb 29, 2012

   Imported from Plone on Mar 15, 2021. The date for this update is the
   last modified date in Plone.
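One last convenience: the refresh is always the same dance, so it's worth
wrapping in a small script. This is only a sketch, assuming the stock
buildout-generated ``bin/instance`` and ``bin/zeoserver`` control scripts;
substitute however you actually stop and start Zope and ZEO::

    #!/bin/sh -e
    # refresh-from-prod.sh -- refresh the local COW branches from prod.

    # Stop the services so nothing writes during the refresh.
    bin/instance stop
    bin/zeoserver stop

    # Unmount the unions first; aufs doesn't expect its branches to be
    # modified behind its back.
    sudo umount var/filestorage var/blobstorage

    # Pull the latest Data.fs into the local branch and revert any
    # locally modified BLOBs back to their prod versions.
    rsync -Paz var/prod/var/filestorage/Data.fs \
        var/prod/var/filestorage/Data.fs.index var/filestorage.prod/
    rsync -Paz --existing var/prod/var/blobstorage/ var/blobstorage.prod/

    # Re-assemble the unions and bring everything back up.
    sudo mount -v -t aufs none var/filestorage \
        -o dirs=var/filestorage.prod:var/prod/var/filestorage=rr
    sudo mount -v -t aufs none var/blobstorage \
        -o dirs=var/blobstorage.prod:var/prod/var/blobstorage=rr
    bin/zeoserver start
    bin/instance start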