
[Fwd: Re: [Xen-devel] [PATCH] skeleton frontend/backend examples and a deadlock]



--- Begin Message ---
On Thu, 2005-11-03 at 02:17 +0000, Mark Williamson wrote:
> A few random questions:
> 
> * Does XenIDC have any performance impact?

I expect so :-)  The slight extra complexity might make it go slower;
alternatively, because all the common code is in one place (so when
you optimise it you improve all the drivers), it might make it go
faster :-)

The only thing I can think of which is non-optimal about the design of
the API from a performance perspective is that a network-transparent
implementation wouldn't easily be able to couple the transaction
completion to the completion of a bulk data send.  This isn't an issue
until the IDC mechanism has to span nodes in a cluster, though, and
that is probably a way off for Xen.  The existing driver code of
course has much bigger problems with network transparency.

The implementation probably needs some performance tweaking to get
batching working correctly and possibly to do speculative interrupt
handling.  Unfortunately I was forced to get a couple of patents on this
stuff a while back and I'm not sure if I'm allowed to put it in.  I'll
look into it when I have the code working.

> * Can it be compatible with the current ring interface, or does it imply 
> incompatibility with the existing scheme? (i.e. is it an "all or nothing" 
> patch?)

The API implementation isn't binary compatible with the ring code in
the other drivers, or with the current way the store is used to set up
the ring interface, but it's not an all-or-nothing patch because it
can coexist side by side with the other drivers.

The endpoint does use shared pages for a ring buffer, but it shares
two pages, one from the FE and one from the BE, each mapped read-only
by the other domain.  I did this because it's a simple, symmetric
implementation which was the quickest for me to implement, and I think
it is easy to understand.  It also has the advantage that if the ring
gets scribbled on you can point the finger at which domain was likely
to be responsible.  I'm not aware of any security implications.
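
To illustrate the layout, something like this (all names are
hypothetical, just to show the shape, not the actual XenIDC
structures):

#include <stdint.h>

#define RING_PAGE_SIZE 4096

/* Each domain grants the other read-only access to one page: it
 * writes messages only into its own page and only ever reads the
 * peer's, so if a ring gets scribbled on, the owner of that page is
 * the likely culprit. */
struct idc_ring_page {
    uint32_t producer;       /* advanced only by the owning domain */
    uint32_t peer_consumer;  /* how far the owner has read in the peer's page */
    uint8_t  data[RING_PAGE_SIZE - 8];  /* variable-size messages, see below */
};

struct idc_endpoint {
    struct idc_ring_page *tx;        /* our page: we write, the peer reads */
    const struct idc_ring_page *rx;  /* the peer's page: mapped read-only here */
};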

The format of the data in the ring is slightly more complicated too,
because the code is generic and has to cope with the varying size of
different clients' requests.
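
Concretely, each message would carry its own framing, along these
lines (again a hypothetical sketch, not the real wire format):

#include <stdint.h>

/* Length-prefixed framing so requests of different sizes from
 * different clients can share one generic ring. */
struct idc_message_header {
    uint32_t length;     /* header plus payload, in bytes */
    uint32_t client_id;  /* which client's protocol the payload belongs to */
    /* 'length - sizeof(struct idc_message_header)' bytes of payload follow */
};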

I use the store differently because I wanted to correctly handle
suspend/resume and loadable modules.  I don't think any of the other
drivers do this correctly yet, not even Rusty's skeleton driver.

The API, though, is completely decoupled from the implementation, so
you could change the underlying implementation to go back to a single
page granted from the FE to the BE, or anything else you like.  I
doubt you'd be able to make the implementation binary compatible with
the existing code without adding some special cases for it.
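
By decoupled I mean that callers only ever see something like an ops
table (hypothetical names again), so the transport behind it can be
swapped without touching the clients:

#include <stddef.h>

/* Clients program against these operations; the two-page ring, a
 * single FE-granted page, or even a network transport could all sit
 * behind the same interface. */
struct idc_transport_ops {
    int  (*send)(void *transport, const void *msg, size_t len);
    int  (*recv)(void *transport, void *msg, size_t len);
    void (*kick)(void *transport);  /* notify the peer, e.g. an event channel */
};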

> * Will it be able to leverage page transfers?

I expect so.  I used the local/remote buffer reference abstraction for
the bulk data transfer.  You could define a local buffer reference for
memory that was intended to be transferred.  This could be converted
into a new kind of remote buffer reference which could be interpreted
accordingly at the destination.  The implementation is designed to be
extended with an arbitrary number of different types of buffer
reference.
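
As a sketch of what that extension point might look like (hypothetical
names, not the code I'm about to send out):

#include <stdint.h>

/* A remote buffer reference is tagged with its kind; new kinds can be
 * added, e.g. one where page ownership is transferred to the receiver
 * rather than the pages being mapped. */
enum idc_rbr_kind {
    IDC_RBR_GRANT_MAPPED,   /* receiver maps the granted pages */
    IDC_RBR_PAGE_TRANSFER,  /* page ownership moves to the receiver */
};

struct idc_remote_buffer_ref {
    uint32_t kind;      /* an idc_rbr_kind value */
    uint32_t nr_pages;
    uint32_t refs[];    /* grant references or transferred frames */
};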

I've almost finished the rbr_provider_pool, which is the FE side of
the bulk data transfer mechanism.  When I send out a patch with this
code in it, it will demonstrate how the local and remote buffer
references are used.

Harry.

--- End Message ---
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

 

