
Re: [Xen-devel] [Patch 3/7] pvSCSI driver



Hi Ian-san and Steven-san,

Thank you for your comments.

In fact, the previous version of the pvSCSI driver used two rings, one
for frontend-to-backend and one for backend-to-frontend communication.
The backend also queued requests from the frontend internally and
released the ring entries immediately. This may be a very similar
concept to Netchannel2.

However, this version went back to a simple single-ring architecture,
the same as VBD. We expect that performance will not be degraded,
because the transactions are distributed across multiple rings.
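For reference, the VBD-style single ring is built with the standard
macros from Xen's public io/ring.h header. A minimal sketch follows;
the request/response structures are placeholders, not the real
vscsiif definitions.

#include <stdint.h>
#include <xen/interface/io/ring.h>  /* <public/io/ring.h> in the Xen tree */

/* Placeholder formats only; the real definitions live in the driver. */
struct ex_request  { uint16_t rqid; uint8_t cdb[16]; };
struct ex_response { uint16_t rqid; int16_t result;  };

/* Generates ex_sring, ex_front_ring and ex_back_ring types plus the
 * usual producer/consumer index handling. */
DEFINE_RING_TYPES(ex, struct ex_request, struct ex_response);

static struct ex_front_ring front;

/* Frontend setup: the shared page is granted to the backend, and the
 * grant reference and event channel are advertised over xenbus
 * (omitted here). */
static void ex_frontend_init(struct ex_sring *sring)
{
    SHARED_RING_INIT(sring);
    FRONT_RING_INIT(&front, sring, PAGE_SIZE);
}

/* Queue one request and kick the backend if it needs waking. */
static void ex_frontend_queue(const struct ex_request *req)
{
    int notify;
    struct ex_request *slot = RING_GET_REQUEST(&front, front.req_prod_pvt);

    *slot = *req;
    front.req_prod_pvt++;
    RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&front, notify);
    if (notify) {
        /* Send the event channel notification here, e.g.
         * notify_remote_via_irq() in a Linux frontend. */
    }
}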

We would like to enhance it as a second step after this version is
merged into the Xen tree, if possible.


Best regards,


On Wed, 27 Feb 2008 12:23:28 -0000
"Ian Pratt" <Ian.Pratt@xxxxxxxxxxxxx> wrote:

> > I think the current netchannel2 plan also calls for variable-sized
> > messages with split front->back and back->front rings.  It might be
> > possible to share some code there (although at present there doesn't
> > exist any code to share).
> > 
> > I'd also strongly recommend supporting multi-page rings.  That will
> > allow you to have more requests in flight at any one time, which
> > should
> > lead to better performance.
> 
> 
> The PV SCSI stuff is great work and I'm very keen to get it into
> mainline. However, I'd very much like to see it use the same flexible
> ring structure that's being used for netchannel2. The main features are
> as follows:
> 
> * A pair of rings, one for communication in each direction (responses
> don't go in the same ring as requests, as they did in the original
> netchannel)
> 
> * The rings are fixed in size at allocation time, but the area of memory
> they are allocated in may be bigger than a page, i.e. a list of grant
> refs is communicated over xenbus.
> 
> * The data placed on the rings consists of 'self describing' messages
> containing a type and a length. Messages simply wrap over the ring
> boundaries. The producer simply needs to wait until there is enough free
> space on the ring before placing a message.
> 
> * Both the frontend and the backend remove data from the rings and place
> it in their own internal data structures eagerly. This is in contrast to
> the netchannel where free buffers and TX packets were left waiting on
> the rings until they were required. Use of the eager approach enables
> control messages to be muxed over the same ring.  Both ends will
> advertise the number of outstanding requests they're prepared to queue
> internally using a message communicated over the ring, and will error
> attempts to queue more. Since the backend needs to copy the entries
> before verification anyhow, this adds minimal additional overhead.
> 
> 
> Best,
> Ian
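As a rough illustration of the message format described above
(self-describing type/length messages on a ring that may span several
granted pages, with wrapping over the boundary and eager consumption),
something along the following lines could work. All names are
hypothetical and are not taken from netchannel2 or pvSCSI.

#include <stdint.h>
#include <string.h>

/* The data area would really be several granted pages whose grant refs
 * are listed over xenbus; here it is just a flat buffer for clarity. */
#define EX_RING_BYTES (2 * 4096)

struct ex_msg_hdr {
    uint16_t type;      /* e.g. data request, data response, set-credit */
    uint16_t len;       /* total message length, header included */
};

struct ex_byte_ring {
    uint8_t  buf[EX_RING_BYTES];
    uint32_t prod;      /* free-running byte counter, producer side */
    uint32_t cons;      /* free-running byte counter, consumer side */
};

/* Copy 'len' bytes into the ring at 'off', wrapping over the boundary. */
static void ex_ring_write(struct ex_byte_ring *r, uint32_t off,
                          const void *src, uint32_t len)
{
    uint32_t first = EX_RING_BYTES - off;

    if (first > len)
        first = len;
    memcpy(r->buf + off, src, first);
    memcpy(r->buf, (const uint8_t *)src + first, len - first);
}

static void ex_ring_read(const struct ex_byte_ring *r, uint32_t off,
                         void *dst, uint32_t len)
{
    uint32_t first = EX_RING_BYTES - off;

    if (first > len)
        first = len;
    memcpy(dst, r->buf + off, first);
    memcpy((uint8_t *)dst + first, r->buf, len - first);
}

/* Producer: if there is not yet enough free space, the caller simply
 * retries later; otherwise the message is copied in and published.
 * A real implementation needs a write barrier before updating prod. */
static int ex_put_msg(struct ex_byte_ring *r, const void *msg, uint16_t len)
{
    if (EX_RING_BYTES - (r->prod - r->cons) < len)
        return -1;
    ex_ring_write(r, r->prod % EX_RING_BYTES, msg, len);
    r->prod += len;
    return 0;
}

/* Consumer: messages are pulled off eagerly into the caller's buffer so
 * the ring space is recycled at once.  Data requests beyond the credit
 * limit advertised by a set-credit control message would be failed with
 * an error response rather than left sitting on the ring. */
static int ex_get_msg(struct ex_byte_ring *r, void *out, uint16_t max)
{
    struct ex_msg_hdr hdr;

    if (r->prod == r->cons)
        return 0;
    ex_ring_read(r, r->cons % EX_RING_BYTES, &hdr, sizeof(hdr));
    if (hdr.len < sizeof(hdr) || hdr.len > max)
        return -1;
    ex_ring_read(r, r->cons % EX_RING_BYTES, out, hdr.len);
    r->cons += hdr.len;
    return hdr.len;
}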

Jun Kamada



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
