
Re: [Xen-users] Disk performance on guest incredibly low.



Hi Jaap,

We mostly use the stock 4.9 kernels for Stretch. We currently have two new Dell PE R740 servers which require at least a 4.11 kernel; they now run a 4.16 kernel from backports.
But I think you need to disable multi-queue block mode at boot time of both Dom0 and DomU, using the kernel boot parameter:
scsi_mod.use_blk_mq=0

I am not sure (I have not tested this) whether multi-queue block mode propagates from Dom0 to DomU automatically.
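For reference, a minimal sketch of where that parameter could go on a Debian-style setup; the file locations and the update-grub step are assumptions, so adjust to your own distro:

# Dom0: append the parameter to the kernel options in /etc/default/grub
# (keep whatever options are already there)
GRUB_CMDLINE_LINUX="scsi_mod.use_blk_mq=0"
# then regenerate grub.cfg and reboot the Dom0
update-grub

# DomU (PV guest): pass it on the guest's own kernel command line via its xl config
extra="scsi_mod.use_blk_mq=0"

As far as I know, scsi_mod.use_blk_mq only affects the SCSI layer, so whether it changes anything for the xvd* disks that blkfront presents inside a PV guest is part of what would need testing.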
 
Best regards,
 
Roalt Zijlstra
Teamleader Infra & Deliverability

roalt.zijlstra@xxxxxxxxxxx
+31 342 423 262
roalt.zijlstra
https://www.webpower-group.com

Barcelona | Barneveld | Beijing | Chengdu | Guangzhou
Hamburg | Shanghai | Shenzhen | Stockholm
 


On Tue, 18 Dec 2018 at 16:43, Jaap Gordijn <jaap@xxxxxxxxxxx> wrote:

Hi,

 

I just ran newer kernels because I hoped they would improve the I/O performance, which apparently is not the case.

Which kernel versions are you running? Then I can try those, to see if that solves the problem.

 

Best,

 

-- Jaap

 

From: Xen-users <xen-users-bounces@xxxxxxxxxxxxxxxxxxxx> On Behalf Of Roalt Zijlstra | webpower
Sent: Tuesday, 18 December 2018 16:12
To: Håkon Alstadheim <hakon@xxxxxxxxxxxxxxxxxx>
CC: xen-users@xxxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-users] Disk performance on guest incredibly low.

 

Hi,

 

You are definitely running newer kernels than I do. But I have tested that, with a setting in grub.cfg, I can enable multi-queue block mode by adding 'scsi_mod.use_blk_mq=1'.

I would think that with 'scsi_mod.use_blk_mq=0' you can disable multi-queue block mode again.

In general multi-queue block mode should perform better, but in combination with Xen it is worth a try to test the single-queue block mode.
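A quick way to check which mode a running kernel actually uses (a small untested sketch; sdb is just an example device name):

# scsi_mod exposes its current setting in sysfs on these 4.x kernels:
# Y means multi-queue (blk-mq), N means the legacy single-queue path
cat /sys/module/scsi_mod/parameters/use_blk_mq
# the scheduler list is another hint: mq-deadline/kyber/bfq/none belong to
# blk-mq, while noop/deadline/cfq are the legacy single-queue schedulers
cat /sys/block/sdb/queue/scheduler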

 

Best regards,

Roalt Zijlstra
Teamleader Infra & Deliverability

roalt.zijlstra@xxxxxxxxxxx
+31 342 423 262
roalt.zijlstra
https://www.webpower-group.com

Barcelona | Barneveld | Beijing | Chengdu | Guangzhou
Hamburg | Shanghai | Shenzhen | Stockholm

On Mon, 17 Dec 2018 at 19:54, Håkon Alstadheim <hakon@xxxxxxxxxxxxxxxxxx> wrote:


On 17.12.2018 10:32, Roalt Zijlstra | webpower wrote:
> Hey Håkon,
>
> One little tip on disk I/O in virtual hosts is changing the I/O
> scheduler from cfq to noop. In our Xen PV configs we add this at the
> end of the xen.cfg files:
>
> extra="clocksource=tsc elevator=noop"
>
> Especially the "elevator=noop" parameter, which forces all block
> devices to the noop scheduler. In my experience that gave our Xen
> servers a pretty nice boost.
> Red Hat does recommend this for all virtual servers (see the link
> below). To test this you don't need to reboot at all.
>
> For example, the link below has a code snippet for getting and
> setting the scheduler for /dev/hda (replace hda with sdb or any
> other block device):
>
> # cat /sys/block/hda/queue/scheduler
> noop anticipatory deadline [cfq]
>
> # echo 'noop' > /sys/block/hda/queue/scheduler
> # cat /sys/block/hda/queue/scheduler
> [noop] anticipatory deadline cfq
>
> More info: https://access.redhat.com/solutions/5427

Yes, I found that resource some time ago. Tried again now, no
breakthrough. I've got 'none' available as a scheduler, rather than noop,
but they should be equivalent. No matter what I do (in dom0 and/or domU)
I get at least 10x higher speed in the dom0. :-/

Example:

###

### In dom0, md-raid on drives f j g h i k. echo-and-cat does the obvious
### (a possible definition is sketched just below).

### (I usually run with mq-deadline as scheduler; it seems to give
### marginally better performance.)
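(echo-and-cat is a local helper whose definition is not shown; a plausible one-liner, assuming it simply prints the path and then its contents, which matches the output below:)

echo-and-cat () { echo "$1"; cat "$1"; }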

# for f in f j g h i k ; do echo-and-cat /sys/block/sd${f}/queue/scheduler ; done
/sys/block/sdf/queue/scheduler
[none] mq-deadline kyber bfq
/sys/block/sdj/queue/scheduler
[none] mq-deadline kyber bfq
/sys/block/sdg/queue/scheduler
[none] mq-deadline kyber bfq
/sys/block/sdh/queue/scheduler
[none] mq-deadline kyber bfq
/sys/block/sdi/queue/scheduler
[none] mq-deadline kyber bfq
/sys/block/sdk/queue/scheduler
[none] mq-deadline kyber bfq
# mount /dev/disk/by-label/SAS-STEAM /mnt/tull
# cd /mnt/tull/tmp
# df -hT ./
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/md2p8     ext4  196G  164G   23G  88% /mnt/tull
# dd if=/dev/zero of=a_file bs=1M count=1024 conv=fsync oflag=direct
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.56845 s, 235 MB/s
0:root@gentoo tmp # dd if=/dev/zero of=a_file bs=1M count=1024 conv=fsync oflag=direct
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 7.52557 s, 143 MB/s
###

### Booting domu with root and /tmp on SAS-STEAM, ssh into domu

###

# cd /tmp
# df -hT ./
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/xvdb      ext4  196G  163G   24G  88% /
# cat /sys/block/xvdb/
alignment_offset   device/            holders/    power/      ro      subsystem/
bdi/               discard_alignment  inflight    queue/      size    trace/
capability         ext_range          integrity/  range       slaves/ uevent
dev                hidden             mq/         removable   stat
# cat /sys/block/xvdb/queue/scheduler
[none] mq-deadline
# dd if=/dev/zero of=a_file bs=1M count=1024 conv=fsync oflag=direct
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 64.2402 s, 16.7 MB/s
# dd if=/dev/zero of=a_file bs=1M count=1024 conv=fsync oflag=direct
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 59.7517 s, 18.0 MB/s
#



_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-users

 

