
Re: [Xen-API] Re: [Xen-devel] release of 'xapi' toolstack

Unfortunately, the src RPMs are only available in the source ISOs, in
this case source-1.iso. You can download that here:


Obviously this is a less-than-ideal distribution mechanism, and we're  
going to have to fix this! For now, I'll send you out-of-band the src
RPM for dm-multipath that contains the patches I mentioned.

Hope this helps!


On 6 Nov 2009, at 09:54, Pasi Kärkkäinen wrote:

> On Wed, Nov 04, 2009 at 03:23:53PM +0000, Jonathan Ludlam wrote:
>> Hi Pasi,
>> Julian Chesterfield (cc'd) is the person responsible for the storage
>> backends - I'm sure he'll point you in the right direction.
> Ok.
>> Anyway, if you're planning on hacking on the multipathing code,
>> there are a couple of things you should probably be aware of -
>> the version of multipathd that we ship differs in some important
>> ways from the stock CentOS one. There are a couple of incidental
>> patches that do things like alert when paths go up/down, but the
>> most important one is the patch that stops multipathd from listening
>> for uevents. The CentOS version has both multipathd listening for
>> uevents and multipath invoked from the udev scripts, which was racy,
>> and made it difficult to tell when the process of
>> adding a LUN had completed. What we've done is ensure that everything
>> is done synchronously, so the backends explicitly tell multipathd
>> (through the multipathd cli interface -- which is *not* the same as
>> the command 'multipath'!) when paths arrive and disappear.
>> So when a path appears, we effectively do:
>> # echo "add path sd<x>" | multipathd -k
>> When this command returns, we know that multipathd has finished
>> processing the device, and the devmapper node has appeared.
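The synchronous flow described above can be sketched as a tiny wrapper. This is a dry-run sketch only: it prints the multipathd CLI command instead of piping it into a live `multipathd -k` session, and both the function name and the `sdb` path are placeholders, not part of the actual backend code.

```shell
#!/bin/sh
# Dry-run sketch of the synchronous notification described above.
# A real backend would pipe the command into the multipathd CLI, i.e.
#   mpath_cmd add sdb | multipathd -k
# Here we only print the command so the flow is visible without a
# running multipathd. "sdb" is a placeholder path name.
mpath_cmd() {
    # $1: "add" or "remove"; $2: kernel path name, e.g. sdb
    printf '%s path %s\n' "$1" "$2"
}

mpath_cmd add sdb      # a path has appeared
mpath_cmd remove sdb   # a path has vanished
```

In the real flow, once the piped `multipathd -k` invocation returns, the device has been processed and the devmapper node exists.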
> Are these patches available from somewhere?
>> The only thing that is done automatically from udev is a failsafe
>> rule that's executed when a device is removed from the system.
>> This rule tells multipathd to forget about the particular path,
>> which prevents a possible condition where LVM waits indefinitely
>> trying to access a multipathed device where all the paths have  
>> vanished.
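As a rough illustration, a failsafe rule of that kind might look something like the following. This is a hypothetical sketch, not the exact rule XenServer ships; `%k` is udev's substitution for the kernel device name.

```
# Illustrative udev rule (assumed, not the shipped one): on block-device
# removal, tell multipathd to forget the path so LVM cannot block on it.
ACTION=="remove", SUBSYSTEM=="block", KERNEL=="sd*", \
    RUN+="/bin/sh -c 'echo remove path %k | /sbin/multipathd -k'"
```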
>> This is probably the most important aspect of how multipath works
>> currently - obviously there's some more detail, and we'll have to
>> write this up on the xen.org wiki at some point.
> Thanks for the info. This is indeed good to know beforehand :)
> -- Pasi
>> Cheers,
>> Jon
>> -----Original Message-----
>> From: xen-api-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-api- 
>> bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Pasi Kärkkäinen
>> Sent: 03 November 2009 19:51
>> To: Dave Scott
>> Cc: 'Mark Johnson'; xen-devel@xxxxxxxxxxxxxxxxxxx; 
>> xen-api@xxxxxxxxxxxxxxxxxxx
>> Subject: [Xen-API] Re: [Xen-devel] release of 'xapi' toolstack
>> On Tue, Nov 03, 2009 at 06:35:16PM +0000, Dave Scott wrote:
>>> Mark Johnson wrote:
>>>> Other than the GUI, what will remain closed-source in the
>>>> XenServer product?  i.e. are there any extensions to the cli,
>>>> xapi? Any additional libs not present in xen-api-libs.hg?
>>>> Any extensions to blktap?
>>> At present a few server-side pieces are not open-source. These are  
>>> (from memory):
>>> 1. the heartbeat/liveset management daemon which is needed for HA  
>>> (xapi talks to this via a simple interface)
>>> 2. some 3rd party FC tools
>>> 3. a few storage backends (NetApp, EQL and StorageLink)
>> Hmm.. Citrix XenServer doesn't currently support iSCSI multipathing
>> with EQL storage. I've understood the EQL storage backend is mostly
>> for other features (snapshots, cloning, etc.), so now I could
>> actually help fix the multipathing stuff..
>> Any pointers where to look for the iSCSI multipathing stuff?
>> -- Pasi
>> _______________________________________________
>> xen-api mailing list
>> xen-api@xxxxxxxxxxxxxxxxxxx
>> http://lists.xensource.com/mailman/listinfo/xen-api
