
Re: [Xen-users] xvd Device Performance


  • To: <alan@xxxxxxxxxxxxxx>
  • From: "netz-haut - stephan seitz" <s.seitz@xxxxxxxxxxxx>
  • Date: Thu, 23 Feb 2012 09:51:17 +0100
  • Cc: xen-users@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Thu, 23 Feb 2012 08:52:56 +0000
  • List-id: Xen user discussion <xen-users.lists.xen.org>
  • Thread-index: AczyCE9ziJ8PJVjfROaE30I+LczH+w==
  • Thread-topic: [Xen-users] xvd Device Performance


>
> Has anyone been able to get near native disk performance out of a xvdX
> device? The xvdX device maps to a LV disk partition.
>

This depends on the type of VM you're running. If the VM is PV, or at
least uses PV drivers, disk performance shouldn't differ much from
native. Of course, if your box runs several VMs, the combined I/O of
all of them can't exceed your hardware's capabilities.
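A quick way to verify (from inside a Linux domU) that the PV block driver is actually in use; the module name assumes a pvops kernel:

```shell
# If this prints the module line, the guest talks to its disks via the
# paravirtualized block frontend rather than an emulated IDE/SCSI device.
lsmod | grep xen_blkfront && echo "PV block driver loaded"
```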

Leaving aside that hdparm isn't the most reliable benchmark, it looks
like you're getting roughly half the native performance. That is often
a symptom of misaligned partitions. Check your dom0's lvm man pages to
see whether it auto-aligns LVs (a relatively new feature); if not, you
need to calculate and set the offsets by hand. The partition table (if
any) inside your xvda matters as well and needs to respect the
underlying block sizes.
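A sketch of the by-hand calculation (the numbers are made-up examples,
not taken from your box). Here I assume 1 MiB alignment, i.e. 2048
sectors of 512 bytes; adjust BLOCK_SECTORS to your array's stripe or
block size:

```shell
# assumed alignment unit: 1 MiB = 2048 x 512-byte sectors
BLOCK_SECTORS=2048
# example data-area start, e.g. taken from: pvs --units s -o pv_name,pe_start
PE_START_SECTORS=384

if [ $(( PE_START_SECTORS % BLOCK_SECTORS )) -eq 0 ]; then
    echo "data area is aligned"
else
    # round up to the next multiple of the block size, in bytes
    ALIGNED_BYTES=$(( (PE_START_SECTORS / BLOCK_SECTORS + 1) * BLOCK_SECTORS * 512 ))
    echo "misaligned - recreate the PV with: pvcreate --dataalignment ${ALIGNED_BYTES}b"
fi
```

With these example numbers the PV would be misaligned and the suggested
fix is `pvcreate --dataalignment 1048576b` (i.e. 1 MiB).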

There are a few easier ways to get more IOPS (deadline scheduler,
filesystem tweaks, some /sys/block/* tweaks), but none of them can
match correct block alignment.
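For the record, those tweaks look roughly like this on the dom0 (device
name and values are examples, not recommendations; run as root):

```shell
# Switch the backing device's elevator to deadline (path valid for
# kernels of this era; newer kernels may call it mq-deadline).
echo deadline > /sys/block/sda/queue/scheduler

# Enlarge the read-ahead window, in KiB.
echo 1024 > /sys/block/sda/queue/read_ahead_kb

# Verify read-ahead from userspace (reported in 512-byte sectors).
blockdev --getra /dev/sda
```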

I'll try to explain with some ASCII Art ;)

-------------------------------------------------------------------
|<-block ->|          |          |          |          | dom0 sda |
-------------------------------------------------------------------
|                     |<- pv partition aligned to block|          |
-------------------------------------------------------------------
|                     |<- lv upon the vg/pv also aligned          |
-------------------------------------------------------------------
|                     |<- xvda   |<- xvda1 partition aligned      |
-------------------------------------------------------------------
|                                |<- FS * ->| * respects blocksize|
-------------------------------------------------------------------

If there are additional layers below this example's "dom0 sda", e.g.
an iSCSI initiator/target and its underlying storage, you need to
check alignment starting from the lowest level.
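A sketch of how to read off the offsets layer by layer (device names
are examples):

```shell
# 1) dom0: partition start sectors of the physical disk
fdisk -l -u /dev/sda

# 2) dom0: where LVM's data area begins inside the PV, in sectors
pvs --units s -o pv_name,pe_start

# 3) domU: partition offsets inside the exported xvda
fdisk -l -u /dev/xvda
```

Each start offset should be an exact multiple of the block size of the
layer beneath it.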

While this isn't the easiest job, the result is worth every minute ;)


cheers,

Stephan

>
> From the DomU:
>

>
> /dev/xvda:
>
> Timing cached reads:   22960 MB in  1.98 seconds = 11578.12 MB/sec
>
> Timing buffered disk reads:  152 MB in  3.01 seconds =  50.46 MB/sec
>

>
> The disk that the xvda links to:
>

>
> /dev/sdd:
>
> Timing cached reads:   22992 MB in  1.98 seconds = 11600.27 MB/sec
>
> Timing buffered disk reads:  308 MB in  3.01 seconds = 102.20 MB/sec
>

>
> Regards,
>
> Alan
>
>

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users

 

