rbd is in testing. Have a look at the link, but here are some of the highlights (a quick usage sketch follows the list):
* network block device backed by objects in the Ceph distributed object store (rados)
* thinly provisioned
* image resizing
* image export/import/copy/rename
* read-only snapshots
* revert to snapshot
* Linux and qemu/kvm clients
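To make that feature list a bit more concrete, here is a minimal sketch using the Ceph Python bindings (the rados and rbd modules). The pool name 'rbd', the image name 'testimg', and the sizes are made up for the example, and it assumes a running cluster with a ceph.conf in the usual place; treat it as an illustration rather than a recipe.

```python
import rados
import rbd

# Connect to the cluster (assumes /etc/ceph/ceph.conf and a running cluster)
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')            # pool name 'rbd' is an assumption

try:
    r = rbd.RBD()
    r.create(ioctx, 'testimg', 4 * 1024**3)  # thinly provisioned 4 GiB image

    img = rbd.Image(ioctx, 'testimg')
    try:
        img.resize(8 * 1024**3)                # grow the image to 8 GiB
        img.create_snap('before-change')       # read-only snapshot
        img.rollback_to_snap('before-change')  # revert to the snapshot
    finally:
        img.close()
finally:
    ioctx.close()
    cluster.shutdown()
```

The same operations are also exposed through the rbd command line tool and the Linux and qemu/kvm clients mentioned above.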
We are doing something like this now, to a degree, with a mashup of tools in our target.pl creator, though likely not as nice or clean as this.
Ceph builds upon BTRFS, which is an excellent underlying file system, maturing alongside Ceph. BTRFS has been called Linux's answer to ZFS, but if you go through a detailed design comparison, you will see that BTRFS gets a number of things right that ZFS doesn't. There is a good article on this at the always wonderful LWN.net.
Yeah, I know. This will bring the “zfs is the last file system you will ever need” folks out of the woodwork. Whatever.
The point is, Ceph builds upon BTRFS, and exploits much of its goodness.
In the lab, we have our stable 2.6.32.x.scalable kernel, and I've tried some BTRFS bits. Still some crashes, but we also have a testing 2.6.35.x kernel, and it looks like 2.6.36 is going to pop soon, so we might just wait for that for our next testing group. Our stable kernels have been 2.6.23.x, 2.6.28.x, and 2.6.32.x, and we are planning on a .36 or .37 kernel as the next one.
Also, we hope to have a siCluster-based testbed in the lab soon, specific to these nice parallel file systems.
Hopefully more news on this soon.