
Re: [Xen-users] Disk performance on guest incredibly low.



Hey Håkon,

One little tip for disk I/O on virtual hosts is to change the I/O scheduler from cfq to noop. In our Xen PV configs we add this at the end of the xen.cfg files:

extra="clocksource=tsc elevator=noop"

The "elevator=noop" parameter in particular forces all block devices onto the noop scheduler. In my experience that gave our Xen servers a pretty nice boost.
Red Hat recommends this for all virtual servers (see the link below). You don't need to reboot at all to test it.
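
For context, a minimal sketch of where that line sits in a PV guest config. The guest name and the kernel/ramdisk/disk paths here are made up; only the "extra" line is the actual tip:

name    = "guest1"
memory  = 2048
vcpus   = 2
kernel  = "/boot/vmlinuz-guest"             # hypothetical path
ramdisk = "/boot/initrd-guest.img"          # hypothetical path
disk    = [ 'phy:/dev/vg0/guest1,xvda,w' ]  # hypothetical backing device
extra   = "clocksource=tsc elevator=noop"   # kernel cmdline passed to the guest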

For example, the link below has a snippet for getting and setting the scheduler for /dev/hda (replace hda with sdb or any other block device):

# cat /sys/block/hda/queue/scheduler
noop anticipatory deadline [cfq]

# echo 'noop' > /sys/block/hda/queue/scheduler
# cat /sys/block/hda/queue/scheduler
[noop] anticipatory deadline cfq
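
To switch every block device in one go while testing, something like the loop below works; it writes the same sysfs knob, so it does not persist across reboots (put elevator=noop on the kernel command line for that):

# for dev in /sys/block/*/queue/scheduler; do echo noop > "$dev"; done
# grep . /sys/block/*/queue/scheduler

(grep . prints each file name together with its contents, so you see the active scheduler per device at a glance.)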


Best regards,
 
 Roalt Zijlstra
  Teamleader Infra & Deliverability
   
 roalt.zijlstra@xxxxxxxxxxx
 +31 342 423 262
 roalt.zijlstra
 https://www.webpower-group.com
 
 
Barcelona | Barneveld | Beijing | Chengdu | Guangzhou
Hamburg | Shanghai | Shenzhen | Stockholm
 


On Sun, 16 Dec 2018 at 20:18, Håkon Alstadheim <hakon@xxxxxxxxxxxxxxxxxx> wrote:

On 16.12.2018 17:42, frm@xxxxxxxxxxx wrote:
> Disk performance on my Xen guest is very slow. With my latest configuration:
>
> At the guest, I have:
>
>     dd if=/dev/zero of=a_file bs=1M count=1024 conv=fsync oflag=direct
>     1024+0 records in
>     1024+0 records out
>     1073741824 bytes (1.1 GB, 1.0 GiB) copied, 15.6421 s, 68.6 MB/s
>
> At Dom0 I have:
>
>     dd if=/dev/zero of=a_file bs=1M count=1024 conv=fsync oflag=direct
>     1024+0 records in
>     1024+0 records out
>     1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.81855 s, 281 MB/s
>
> The dom0 is about 4 times faster than the guest (281 vs. 68.6 MB/s).
>
Worse here: 15 to 20 x slower (I did a few runs just now; the numbers vary
by quite a lot, but the order-of-magnitude difference is consistent). Same md
partition mounted on dom0 and used as domU root. The md constituents are
spinning SAS disks. In the past I've tried looking at tunables for the domU
disks, to no effect; I would love a "domU disk tuning for dummies" guide.
Throughput is usually tolerable, but when a rebuild is running on the
disk array, the domUs are all but unusable interactively. I've been
poking at this issue for a couple of years without getting anywhere. The
poking has been devoid of proper notes and a strategy, but over time I have
ditched LVM, tried various schedulers (a sketch of that below), triple-checked
alignment, and used both xfs and ext4, all without ever seeing a definitive
breakthrough. I had bcache with an M.2 SSD on top of LVM for a while, until
the M.2 wore out; that helped paper over the issue. (bcache on top of virtual
devices is not recommended, by the way. If you do use it, make sure discard
is disabled in the disk configuration for the VM.)
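
The scheduler poking inside the domU, as a sketch: it is the same sysfs knob as on bare metal, xvd* is the standard naming for Xen PV virtual disks, and which schedulers are offered depends on the guest kernel:

# grep . /sys/block/xvd*/queue/scheduler
# for dev in /sys/block/xvd*/queue/scheduler; do echo noop > "$dev"; done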

Xentop:

xentop - 20:06:04   Xen 4.11.1
      NAME  STATE   CPU(sec) CPU(%)     MEM(k) MEM(%) VCPUS
   Domain-0 -----r      33821    8.1    4995856    7.5 8
  steam.hvm --b---         51    0.3    7332136   11.0 6

Guest steam:

# df -hT  /tmp
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/xvdb      ext4  196G  162G   25G  87% /
# cd /tmp
# dd if=/dev/zero of=a_file bs=1M count=1024 conv=fsync oflag=direct
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 60.1407 s, 17.9 MB/s

Dom0:

# df -hT ./
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/md2p8     ext4  196G  163G   24G  88% /mnt/tull

# dd if=/dev/zero of=a_file bs=1M count=1024 conv=fsync oflag=direct
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 5.37238 s, 200 MB/s



_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-users

 

