
[Xen-devel] Re: [PATCH] Xend XML-RPC Support



On Tue, Mar 21, 2006 at 06:08:47PM -0600, Anthony Liguori wrote:

> Ewan Mellor wrote:
> >On Tue, Mar 21, 2006 at 05:40:05PM -0600, Anthony Liguori wrote:
> >
> >  
> >>First off, xm-test is not passing 100%. The failures are all 
> >>block-related, and each one that I've looked at appears to die for the 
> >>same reason the control runs are dying: it is unable to verify via 
> >>/proc/partitions that the partition is mounted (a sketch of that check 
> >>appears at the end of this mail). Every time I run xm-test I get 
> >>different results (in the control), so it's hard to know for sure 
> >>whether this patch introduces additional block regressions, but I 
> >>don't think it does.
> >>
> >>Everything else passes consistently. I actually went ahead and made 
> >>VmError and XendError inherit from xmlrpc.Fault, which means they show 
> >>up for the client as you'd expect (see the sketch at the end of this 
> >>mail).
> >>
> >>I'm submitting this now so that others can pound on it. I'll keep 
> >>looking at the block failures to see if we can't fix those too. It's 
> >>all one big patch, since with the changes Ewan requested it's not so 
> >>easy to separate anymore.
> >>    
> >
> >Neither of the new files (XMLRPCServer.py and xmlrpclib2.py) has made 
> >it into this patch.  Could you resubmit?  Thanks,
> >  
> 
> Sorry, when I was collapsing to a single changeset I forgot to hg 
> addremove.  New patch attached.

Applied, as you've no doubt seen by now.  Thanks for all your hard work,
Anthony, and thanks to IBM too.

For 3.0.2, we now need to find that bug that's affecting your block
devices (I'm not seeing that regression here, so if it's still affecting
you, then some aggressive debugging is in order!).  Then, we just need
to get everyone to hit the new XML-RPC layer, shake out any remaining
bugs, and we're good to go, I think.
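
For anyone digging into those block failures: the xm-test check in
question presumably amounts to looking for the device in the guest's
/proc/partitions.  Here is a minimal sketch of that sort of check; the
function name and the 'xvda1' device are illustrative, not xm-test's
actual code:

    # Hedged sketch: look for a block device in /proc/partitions, the
    # kind of check that is reportedly failing.  Illustrative only.
    def partition_present(device):
        """Return True if `device` (e.g. 'xvda1') appears in
        /proc/partitions, i.e. the kernel has registered it."""
        f = open('/proc/partitions')
        try:
            for line in f:
                fields = line.split()
                # Data lines look like:  major  minor  #blocks  name
                if len(fields) == 4 and fields[3] == device:
                    return True
        finally:
            f.close()
        return False

    # Run inside the test domain; a False here is what produces the
    # "unable to verify partition" failure described above.
    print partition_present('xvda1')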
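
And for anyone exercising the new XML-RPC layer: the Fault-inheritance
trick Anthony describes works because Python's stock XML-RPC dispatcher
special-cases xmlrpclib.Fault when marshalling errors.  A minimal
sketch of the pattern, using illustrative names rather than Xend's
actual classes:

    # Hedged sketch: exceptions that inherit from xmlrpclib.Fault come
    # back to the client as proper XML-RPC <fault> responses.  VmError
    # here and the demo method are illustrative, not Xend's code.
    import xmlrpclib
    from SimpleXMLRPCServer import SimpleXMLRPCServer

    class VmError(xmlrpclib.Fault):
        def __init__(self, message):
            # The dispatcher catches xmlrpclib.Fault specifically, so
            # this subclass is serialised with its faultCode and
            # faultString instead of as a generic internal error.
            xmlrpclib.Fault.__init__(self, 1, str(message))

    def destroy_domain(name):
        raise VmError("no domain named '%s'" % name)

    server = SimpleXMLRPCServer(('localhost', 8006), logRequests=False)
    server.register_function(destroy_domain)
    # server.serve_forever()

    # On the client side the typed failure surfaces as a Fault:
    #   proxy = xmlrpclib.ServerProxy('http://localhost:8006')
    #   try:
    #       proxy.destroy_domain('nosuchdom')
    #   except xmlrpclib.Fault, fault:
    #       print fault.faultString   # "no domain named 'nosuchdom'"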

Cheers,

Ewan.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

