
RE: [Xen-devel] [Patch 3/7] pvSCSI driver



> In fact, the previous version of the pvSCSI driver used two rings, for
> frontend-to-backend and backend-to-frontend communication respectively.
> The backend also queued requests from the frontend and released the
> ring immediately. This may be a very similar concept to Netchannel2.

Cool, that sounds better. Did you still have fixed-length command
structs, or allow variable-length messages? (I'm very keen we use the
latter.)
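
For concreteness, the kind of header I have in mind for self-describing
messages is something along these lines (a sketch only; the names are
made up, not taken from netchannel2 or your patch):

  /* Sketch only -- names invented for illustration. */
  #include <stdint.h>

  struct msg_hdr {
      uint16_t type;   /* message kind: SCSI request, control message, ... */
      uint16_t len;    /* total length in bytes, header included */
      /* 'len - sizeof(struct msg_hdr)' bytes of payload follow */
  };

The consumer reads the header and then consumes 'len' bytes whatever the
type, so new message types can be added later without changing the ring
format.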

Also, were the rings multi-page?
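
(In case it helps, here is roughly what I mean by a multi-page ring -- a
sketch only, with made-up xenstore node names; printf() stands in for
the real xenstore writes, and refs[] for grant references obtained from
the grant tables:)

  #include <stdio.h>

  /* Publish the ring order plus one grant reference per page, so the
   * backend can map the whole area before running any ring code. */
  static void advertise_ring(unsigned int order, const unsigned int *refs)
  {
      unsigned int i, nr_pages = 1u << order;   /* e.g. order 2 => 4 pages */

      printf("ring-page-order = %u\n", order);
      for (i = 0; i < nr_pages; i++)
          printf("ring-ref%u = %u\n", i, refs[i]);
  }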

> We would like to enhance it as a second step after this version is
> merged into the Xen tree, if possible.

The problem with this approach is that it would change the ABI. The ABI
isn't guaranteed in the unstable tree, but it would have to be locked
before 3.3 could be released (or the code removed/disabled prior to
release).

It's preferable to get stuff like this fixed up before it goes into the
tree, as in our experience developers often get retasked by their
management to other work items as soon as the code goes in, and don't
get around to the fixups.  Against that, getting it into the tree
exposes it to more testing earlier, which is helpful. If you're
confident that the former is not going to happen to you, let's talk
about which minor cleanups are important.

Thanks very much for your work on this project!

Best,
Ian

> 
> 
> Best regards,
> 
> 
> On Wed, 27 Feb 2008 12:23:28 -0000
> "Ian Pratt" <Ian.Pratt@xxxxxxxxxxxxx> wrote:
> 
> > > I think the current netchannel2 plan also calls for variable-sized
> > > messages with split front->back and back->front rings.  It might be
> > > possible to share some code there (although at present there
> > > doesn't exist any code to share).
> > >
> > > I'd also strongly recommend supporting multi-page rings.  That will
> > > allow you to have more requests in flight at any one time, which
> > > should lead to better performance.
> >
> >
> > The PV SCSI stuff is great work and I'm very keen to get it into
> > mainline. However, I'd very much like to see it use the same
> > flexible ring structure that's being used for netchannel2. The main
> > features are as follows:
> >
> > * A pair of rings, one for communication in each direction
> > (responses don't go in the same ring as in the original netchannel).
> >
> > * The rings are fixed in size at allocation time, but the area of
> > memory they are allocated in may be bigger than a page, i.e. a list
> > of grant refs is communicated over xenbus.
> >
> > * The data placed on the rings consists of 'self-describing'
> > messages containing a type and a length. Messages simply wrap over
> > the ring boundaries. The producer simply needs to wait until there
> > is enough free space on the ring before placing a message.
> >
> > * Both the frontend and the backend remove data from the rings and
> > place it in their own internal data structures eagerly. This is in
> > contrast to the netchannel, where free buffers and TX packets were
> > left waiting on the rings until they were required. Use of the eager
> > approach enables control messages to be muxed over the same ring.
> > Both ends will advertise the number of outstanding requests they're
> > prepared to queue internally, using a message communicated over the
> > ring, and will reject attempts to queue more. Since the backend
> > needs to copy the entries before verification anyhow, this adds
> > minimal additional overhead (see the sketch after this list).
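
[For illustration only, a rough sketch of the producer side and the
credit check described in the last two bullets -- names invented, memory
barriers and error handling omitted:]

  #include <stdint.h>

  struct byte_ring {
      uint8_t  *buf;    /* shared area, possibly spanning several pages */
      uint32_t  size;   /* bytes, power of two */
      uint32_t  prod;   /* free-running producer index (we advance it) */
      uint32_t  cons;   /* free-running consumer index (peer advances it) */
  };

  /* Copy a self-describing message onto the ring, wrapping over the end.
   * Returns -1 when there is not yet enough free space, so the caller
   * can retry once the consumer has caught up. */
  static int ring_put(struct byte_ring *r, const void *msg, uint32_t len)
  {
      uint32_t i;

      if (r->size - (r->prod - r->cons) < len)
          return -1;
      for (i = 0; i < len; i++)
          r->buf[(r->prod + i) & (r->size - 1)] = ((const uint8_t *)msg)[i];
      r->prod += len;   /* publish; a write barrier belongs just before this */
      return 0;
  }

  /* Each end advertises how many requests it will queue internally;
   * anything beyond that limit is rejected rather than queued. */
  static int may_queue_request(uint32_t in_flight, uint32_t advertised_max)
  {
      return in_flight < advertised_max;
  }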
> >
> >
> > Best,
> > Ian
> >
> >
> >
> 
> Jun Kamada
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel