
Re: [Xen-devel] [PATCH] Blktap: Userspace file-based image support. (RFC)

  • To: "Rusty Russell" <rusty@xxxxxxxxxxxxxxx>
  • From: "Andrew Warfield" <andrew.warfield@xxxxxxxxxxxx>
  • Date: Thu, 29 Jun 2006 07:34:13 -0700
  • Cc: Xen Developers <xen-devel@xxxxxxxxxxxxxxxxxxx>, Julian Chesterfield <julian.chesterfield@xxxxxxxxxxxx>
  • Delivery-date: Thu, 29 Jun 2006 07:34:35 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

> Last I looked at the blkif front end, it uses a noop I/O scheduler,
> which means that the only one doing scheduling is the backend.  I can
> easily imagine that if the backend is synchronous, this would be slow.
>
> However, it's not clear to me that doing scheduling in the backend will
> generally be faster than doing it in the front end.  I suppose it should
> be, if the backend domain were serving multiple frontends from the same

Well, scheduling across multiple VM request streams is certainly one
reason for presenting as large a request aperture as possible to the
physical (backend) disk scheduler.  The fact that the frontend doesn't
necessarily have any idea how its blocks are actually laid out on the
disk is another -- in the case of file-backed images for

> >  AIO just
> > lets me issue batches of requests at once, and so minimizes context
> > switching through userland -- which was something I was worried about
> > causing overhead on x86_64.  I don't really think it adds that much
> > complexity.

> Sure, I would have used a pool of processes because I'm old-fashioned,
> but AIO is probably a better choice for multiple requests at once.

My older code was written without the benefit of working AIO for Xen
Linux.  I knocked up a thread pool to improve performance and it worked
reasonably well, although I found that you needed a fairly large number
of threads to saturate the disk (with blocking I/O, which was a little
naive ;) ), and it represented a fairly large chunk of unnecessary
moving parts.

The Linux libaio stuff is pretty good, actually.  Requests map rather
directly down onto the kernel bio interface, so with AIO the userland
block backend code is doing a very similar thing to the in-kernel
driver.  As Anthony points out, libaio is unthreaded: you just fill out
a batch of request structs and shove it down.  It's very fast indeed and
quite low-overhead.  My only real complaint is that despite a couple
of years discussing ways to do it on libaio-devel, the AIO developers
haven't settled on a unified way to poll on AIO completions and normal
file handles, which is a bit of an inconvenience when you want to do


Xen-devel mailing list


