
Re: [Xen-users] Best Practices for PV Disk IO?


  • To: Jeff Sturm <jeff.sturm@xxxxxxxxxx>
  • From: Christopher Chen <muffaleta@xxxxxxxxx>
  • Date: Mon, 20 Jul 2009 20:18:04 -0700
  • Cc: xen-users@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Mon, 20 Jul 2009 20:18:50 -0700
  • List-id: Xen user discussion <xen-users.lists.xensource.com>

On Mon, Jul 20, 2009 at 7:25 PM, Jeff Sturm <jeff.sturm@xxxxxxxxxx> wrote:
>> -----Original Message-----
>> From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-users-
>> bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Christopher Chen
>> Sent: Monday, July 20, 2009 8:26 PM
>> To: xen-users@xxxxxxxxxxxxxxxxxxx
>> Subject: [Xen-users] Best Practices for PV Disk IO?
>>
>> I was wondering if anyone's compiled a list of places to look to
>> reduce Disk IO Latency for Xen PV DomUs. I've gotten reasonably
>> acceptable performance from my setup (Dom0 as an iSCSI initiator,
>> providing phy volumes to DomUs), at about 45MB/sec writes and
>> 80MB/sec reads (this is to an IET target running in blockio mode).
>
> For domU hosts, xenblk over phy: is the best I've found.  I can get
> 166MB/s read performance from domU with O_DIRECT and 1024k blocks.
>
> Smaller block sizes yield progressively lower throughput, presumably due
> to read latency:
>
> 256k: 131MB/s
>  64k:  71MB/s
>  16k:  33MB/s
>   4k:  10MB/s
>
> Running the same tests on dom0 against the same block device yields only
> slightly faster throughput.
>
> If there's any additional magic to boost disk I/O under Xen, I'd like to
> hear it too.  I also pin my dom0 to an unused CPU so it is always
> available.  My shared block storage runs the AoE protocol over a pair of
> 1GbE links.
>
> The good news is that there doesn't seem to be much I/O penalty imposed
> by the hypervisor, so the domU hosts typically enjoy better disk I/O
> than an inexpensive server with a pair of SATA disks, at far less cost
> than the interconnects needed to couple a high-performance SAN to many
> individual hosts.  Overall, the performance seems like a win for Xen
> virtualization.
>
> Jeff

Jeff:

That sounds about right. Those numbers I quoted were from an iozone
latency test with 64k block sizes--80 is very close to your 71!
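
In case anyone wants to repeat that kind of block-size sweep, here's
roughly what the O_DIRECT sequential-read test looks like. This is only
a sketch with a made-up device path (dd with iflag=direct will give you
the same kind of numbers):

# O_DIRECT sequential-read sweep (Linux-only); DEV and the sizes are
# placeholders -- point it at a volume you can afford to hammer, as root.
import mmap
import os
import time

DEV = "/dev/xvdb"            # placeholder device, not my actual volume
TOTAL = 256 * 1024 * 1024    # bytes to read per block size

for bs in (4096, 16384, 65536, 262144, 1048576):
    buf = mmap.mmap(-1, bs)  # anonymous mmap is page-aligned, which O_DIRECT wants
    fd = os.open(DEV, os.O_RDONLY | os.O_DIRECT)
    start = time.time()
    done = 0
    while done < TOTAL:
        n = os.readv(fd, [buf])   # one block read, straight past the page cache
        if n <= 0:
            break
        done += n
    os.close(fd)
    elapsed = time.time() - start
    print("%8d-byte reads: %6.1f MB/s" % (bs, done / elapsed / 1e6))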

I found that increasing readahead (up to a point) is what really gets
me to 80MB/sec reads, and that a low nr_requests in the Linux domU
seems to make the scheduler (cfq in the domU) dispatch writes sooner,
which pushes writes up to around 50MB/sec.
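
For what it's worth, those knobs are all plain sysfs settings on the
domU side. Here's a rough sketch of what I mean--the device name and the
values are placeholders, not a recommendation, since the right numbers
depend on the workload:

# Sketch of the domU-side knobs mentioned above (Linux sysfs, needs root).
# Device name and values are placeholders -- benchmark before settling
# on anything.
import os

DEV = "xvda"                      # placeholder domU block device
QUEUE = "/sys/block/%s/queue" % DEV

def set_knob(name, value):
    path = os.path.join(QUEUE, name)
    with open(path, "w") as f:
        f.write(str(value))
    print("%s = %s" % (path, value))

set_knob("read_ahead_kb", 1024)   # bigger readahead is what got my reads up
set_knob("nr_requests", 32)       # a short queue seemed to dispatch writes sooner
set_knob("scheduler", "cfq")      # the elevator these numbers came from

(blockdev --setra on the device node covers the readahead part too, if
you'd rather not poke sysfs by hand.)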

Of course, on the Dom0, I see 110MB/sec writes and reads on the same
block device at 64k.
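
(For context, the phy: export is just the ordinary thing in the guest
config--the path and device name here are made up, not my actual
volumes:

disk = [ 'phy:/dev/VolGroup00/guest-root,xvda,w' ]

with the iSCSI-backed LV showing up as a block device in the Dom0
first.)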

But yeah, I'd love to hear what other people are doing...

Cheers!

cc

-- 
Chris Chen <muffaleta@xxxxxxxxx>
"The fact that yours is better than anyone else's
is not a guarantee that it's any good."
-- Seen on a wall

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

