
Re: [Xen-users] SAN + XEN + FC + LVM question(s)



On Tue, Sep 16, 2008 at 11:08:39AM +0200, Ferenc Wagner wrote:
> "Javier Guerra" <javier@xxxxxxxxxxx> writes:
> 
> > On Mon, Sep 15, 2008 at 11:02 AM, Wendell Dingus <wendell@xxxxxxxxxxxxx> wrote:
> >> Are there pitfalls or limitations I've not thought of here though? Is this
> >> approach a "best practice", or is some other method considered "better"?
> >
> > AFAICS, you're right and ready to go.
> >
> > The only thing I don't like too much is using LVM inside DomUs.  In
> > theory, scanning tools in the dom0 could confuse the LVM metadata
> > inside those volumes with the 'outer' LVM metadata and wreck the
> > whole thing.  I don't think it really happens in practice...
> 
> You can easily sidestep this by setting up appropriate "filter" rules
> in lvm.conf in your dom0s.  That way pvscan won't look inside your
> LVs, and thus won't discover the embedded guest LVs.  Still, if you
> shut down a guest and want to access its LVs from the dom0, you are
> perfectly able to: just relax your filters, run pvscan, and there you
> go.  Just don't forget to undo that before starting the guest again.
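> 
> A minimal sketch of such a filter (assuming the dom0 keeps its guest
> disks in a VG called "vg_guests" -- adjust the patterns to your own
> naming scheme):
> 
>   # /etc/lvm/lvm.conf on the dom0
>   devices {
>       # reject the LVs handed to guests as PVs (they appear under
>       # both /dev/<vg>/ and /dev/mapper/), accept everything else
>       filter = [ "r|^/dev/vg_guests/|", "r|^/dev/mapper/vg_guests-|", "a|.*|" ]
>   }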
> 
> One thing I contemplated was using "multipath" in the guest.  That
> may give you a chance to propagate a resize into the guest without
> rebooting it or permanently introducing another PV in it.  Like this:
> resize the LV in the dom0, pass it to the guest as another block
> device, replace the LVM mappings so that they point to the new
> device, then detach the original device, pvresize, etc.  I'm not sure
> multipath-tools would work in this setup, but dmsetup surely would.
> I never tried it, though.
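> 
> Roughly, an untested sketch of the dmsetup variant from inside the
> guest ("guestpv", xvdb and xvdc are made-up names here; xvdb is the
> original PV, xvdc the resized replacement passed in from the dom0):
> 
>   # one-time setup: put the PV on a linear dm mapping instead of
>   # using /dev/xvdb directly (dm tables count 512-byte sectors)
>   echo "0 $(blockdev --getsz /dev/xvdb) linear /dev/xvdb 0" \
>       | dmsetup create guestpv
>   pvcreate /dev/mapper/guestpv
> 
>   # later: quiesce I/O, stage a table pointing at the bigger
>   # device, resume to swap it in, then grow the PV to fill it
>   dmsetup suspend guestpv
>   echo "0 $(blockdev --getsz /dev/xvdc) linear /dev/xvdc 0" \
>       | dmsetup reload guestpv
>   dmsetup resume guestpv
>   pvresize /dev/mapper/guestpv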

Yes, dm-multipath should allow that.

Check https://www.redhat.com/archives/dm-devel/2008-August/msg00033.html

-- Pasi
