
Re: [Xen-users] Disk performance on guest incredibly low.




On 17.12.2018 10:32, Roalt Zijlstra | webpower wrote:
Hey Håkon,

One little tip on disk I/O in virtual hosts is to change the I/O scheduler from cfq to noop. In our Xen PV configs we add this at the end of the xen.cfg files:

extra="clocksource=tsc elevator=noop"

The "elevator=noop" parameter in particular forces the noop scheduler onto all block devices. In my experience that gave our Xen servers a pretty nice boost. Red Hat recommends this for all virtual servers (see the link below). You don't need to reboot at all to test this.
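For context, here is a rough xen.cfg sketch showing where the line goes (everything except the extra= line is just a placeholder, not taken from a real config):

name   = "guest"                          # placeholder guest name
memory = 2048
vcpus  = 2
kernel = "/boot/vmlinuz-guest"            # placeholder PV kernel
disk   = [ 'phy:/dev/vg0/guest,xvda,w' ]  # placeholder disk line
vif    = [ 'bridge=xenbr0' ]
# Kernel command line handed to the PV guest; this is where the tip goes:
extra  = "clocksource=tsc elevator=noop"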

For example, the link below has a code snippet for getting and setting the scheduler for /dev/hda (replace hda with sdb or any other block device):

# cat /sys/block/hda/queue/scheduler
noop anticipatory deadline [cfq]

# echo 'noop' > /sys/block/hda/queue/scheduler
# cat /sys/block/hda/queue/scheduler
[noop] anticipatory deadline cfq
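To apply it to every sdX device in one go, a loop like this should work (a sketch; adjust the glob to your device names, and note that newer kernels list 'none' instead of 'noop'):

# for q in /sys/block/sd*/queue/scheduler; do echo noop > "$q"; done   # set all at once
# grep . /sys/block/sd*/queue/scheduler                                # verify the result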

More info: https://access.redhat.com/solutions/5427

Yes, I found that resource some time ago. Tried again now, no breakthrough. I've got 'none' available as a scheduler rather than noop (newer blk-mq kernels replace noop with none), but they should be equivalent. No matter what I do (in dom0 and/or domU) I get at least 10x higher speed in dom0. :-/
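(One sanity check worth mentioning: whether the extra= parameters actually reach the guest kernel can be seen from inside the domU with something like:

# cat /proc/cmdline
)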

Example:

###

### In dom0, md-raid on drives f j g h i k. echo-and-cat does the obvious (a sketch of it is below).

### (I usually run with mq-deadline as the scheduler; it seems to give marginally better performance.)
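### (echo-and-cat isn't a standard tool; a rough equivalent, assuming it just prints each path and then its contents, would be:)

echo-and-cat() {
    # print the path, then the file contents, for each argument
    for f in "$@"; do
        echo "$f"
        cat "$f"
    done
}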

# for f in f j g h i k ; do echo-and-cat /sys/block/sd${f}/queue/scheduler;done
/sys/block/sdf/queue/scheduler
[none] mq-deadline kyber bfq
/sys/block/sdj/queue/scheduler
[none] mq-deadline kyber bfq
/sys/block/sdg/queue/scheduler
[none] mq-deadline kyber bfq
/sys/block/sdh/queue/scheduler
[none] mq-deadline kyber bfq
/sys/block/sdi/queue/scheduler
[none] mq-deadline kyber bfq
/sys/block/sdk/queue/scheduler
[none] mq-deadline kyber bfq
# mount /dev/disk/by-label/SAS-STEAM /mnt/tull
# cd /mnt/tull/tmp
# df -hT ./
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/md2p8     ext4  196G  164G   23G  88% /mnt/tull
# dd if=/dev/zero of=a_file bs=1M count=1024 conv=fsync oflag=direct
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.56845 s, 235 MB/s
0:root@gentoo tmp # dd if=/dev/zero of=a_file bs=1M count=1024 conv=fsync oflag=direct
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 7.52557 s, 143 MB/s
###

### Booting domU with root and /tmp on SAS-STEAM, ssh into domU

###
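### (The guest sees that filesystem as xvdb; the disk line in the domU config is along these lines,
### with the exact source path being an assumption here:)

disk = [ 'phy:/dev/md2p8,xvdb,w' ]   # assumed: the same md2p8 shown mounted in dom0 above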

# cd /tmp
# df -hT ./
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/xvdb      ext4  196G  163G   24G  88% /
# cat /sys/block/xvdb/
alignment_offset  bdi/        capability  dev        device/     discard_alignment
ext_range         hidden      holders/    inflight   integrity/  mq/
power/            queue/      range       removable  ro          size
slaves/           stat        subsystem/  trace/     uevent
# cat /sys/block/xvdb/queue/scheduler
[none] mq-deadline
# dd if=/dev/zero of=a_file bs=1M count=1024 conv=fsync oflag=direct
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 64.2402 s, 16.7 MB/s
# dd if=/dev/zero of=a_file bs=1M count=1024 conv=fsync oflag=direct
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 59.7517 s, 18.0 MB/s
#




