
Re: [Xen-users] CoW scripts.



On Sun, 2014-07-27 at 17:16 +0200, Florian Heigl wrote:
> Hi,
> 
> in the book of Xen there's this small bit on copy on write on the old Xen 
> demo CD.
> 
> "...where 30 is the size of the backing (rw) storage overlay. The backing 
> storage is actually a file in /tmp, which is mounted as a loop device and set 
> up using LVM. /usr/sbin/create_cow creates the storage, /usr/sbin/destroy_cow 
> deletes it. This script is called by /etc/xen/scripts/block-cow which in turn 
> is being called by /etc/xen/scripts/block when 'cow:' is specified as block 
> device type in the Xen VM configuration"
> 
> 
> Would anyone happen to still have create_cow and block-cow around? (Or the 
> demo CD)

Googling for "xen block-cow" picks up a few things (GitHub repos etc.)
which seem to contain a block-cow.c of some sort; is that what you need?
I'm not having as much luck with create_cow, though.

Do you have a filename for the demo CD? I can have a look for a copy on
e.g. xenbits.
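For what it's worth, the mechanism the book describes can be sketched
roughly as below. This is only a reconstruction from the quoted
description, not the original demo-CD script; the function name, paths,
sizes and dmsetup parameters are all my assumptions.

```shell
#!/bin/sh
# Rough sketch of a create_cow-style helper, reconstructed from the
# description in the book. NOT the original script; names, paths and
# parameters are assumptions.

create_cow_sketch() {
    base="$1"                    # read-only base block device
    size_gb="${2:-30}"           # size of the rw overlay ("30" in the book)
    backing="/tmp/cow-$$.img"    # backing file lives in /tmp

    # allocate a sparse backing file of size_gb gigabytes
    dd if=/dev/zero of="$backing" bs=1 count=0 seek="${size_gb}G" || return 1

    # attach it as a loop device
    loopdev=$(losetup -f --show "$backing") || return 1

    # stack a device-mapper snapshot on top: reads fall through to the
    # base device, writes land in the loop-backed overlay
    # (table format: start length snapshot origin cow-dev P|N chunksize;
    # N = non-persistent, which suits a throwaway /tmp overlay)
    sectors=$(blockdev --getsz "$base") || return 1
    echo "0 $sectors snapshot $base $loopdev N 16" \
        | dmsetup create "cow-$$"
}
```

A block-cow script would then presumably hand the resulting
/dev/mapper node back to the generic block script whenever the "cow:"
device type is seen in the VM config.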
 
> We're currently working on Ceph support (Yes! More later...) and I would like 
> to give auto-cloning images a whirl.
> 
> Are there any more recent (Xen 4.2+) attempts at copy on write that worked / 
> are in use anywhere?

None that I'm aware of, I'm afraid, but it's entirely plausible that
I've missed something since I don't follow the block side of things all
that closely.
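One thing that does work on recent Xen without custom hotplug scripts
is letting the qemu-based disk backend (qdisk) do the copy-on-write:
create a qcow2 overlay whose backing file is the shared read-only
image, and point each VM at its own overlay. A minimal sketch, assuming
qemu-img is installed; the function and file names are illustrative:

```shell
# Hedged sketch: per-VM copy-on-write via a qcow2 overlay.
# Function and file names are illustrative assumptions.

make_cow_overlay() {
    base="$1"       # shared, read-only base image
    overlay="$2"    # per-VM qcow2 overlay; writes go here

    # the overlay records only the blocks that differ from the base
    qemu-img create -f qcow2 -b "$base" "$overlay"
}

# The overlay is then used as an ordinary disk in the xl config, e.g.:
#   disk = [ 'format=qcow2, vdev=xvda, access=rw, target=/path/to/overlay.qcow2' ]
```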

> (I don't care about research papers with no sources online ;)
> 
> Have a nice Sunday / week!
> 
> Florian
> 
> 
> About Ceph support for Xen, you can have a peek at 
> https://github.com/FlorianHeigl/xen-ceph-rbd 
> ...right now we still need to get pygrub working
> _______________________________________________
> Xen-users mailing list
> Xen-users@xxxxxxxxxxxxx
> http://lists.xen.org/xen-users




 

