
Re: [Xen-users] File-based domU - Slow storage write since xen 4.8



On 08/02/17 18:29, Benoit Depail wrote:
> On 08/01/2017 11:48 AM, Roger Pau Monné wrote:
>> On Fri, Jul 28, 2017 at 04:50:27PM +0200, Benoit Depail wrote:
>>> On 07/26/17 00:25, Keith Busch wrote:
>>>> On Fri, Jul 21, 2017 at 07:07:06PM +0200, Benoit Depail wrote:
>>>>> On 07/21/17 18:07, Roger Pau Monné wrote:
>>>>>>
>>>>>> Hm, I'm not sure I follow either. AFAIK this problem came from
>>>>>> changing the Linux version in the Dom0 (where the backend, blkback,
>>>>>> is running), rather than in the DomU, right?
>>>>>>
>>>>>> Regarding the queue/sectors stuff, blkfront uses several blk_queue
>>>>>> functions to set those parameters, maybe there's something wrong
>>>>>> there, but I cannot really spot what it is:
>>>>>>
>>>>>> http://elixir.free-electrons.com/linux/latest/source/drivers/block/xen-blkfront.c#L929
>>>>>>
>>>>>> In the past the number of pages that could fit in a single ring
>>>>>> request was limited to 11, but some time ago indirect descriptors
>>>>>> were introduced in order to lift this limit, and now requests can
>>>>>> contain many more pages.
>>>>>>
>>>>>> Could you check the max_sectors_kb of the underlying storage you are
>>>>>> using in Dom0?
>>>>>>
>>>>>> Roger.
>>>>>>
>>>>> I checked the value for the loop device as well
>>>>>
>>>>> With 4.4.77 (bad write performance)
>>>>> $ cat /sys/block/sda/queue/max_sectors_kb
>>>>> 1280
>>>>> $ cat /sys/block/loop1/queue/max_sectors_kb
>>>>> 127
>>>>>
>>>>>
>>>>> With 4.1.42 (normal write performance)
>>>>> $ cat /sys/block/sda/queue/max_sectors_kb
>>>>> 4096
>>>>> $ cat /sys/block/loop1/queue/max_sectors_kb
>>>>> 127
>>>>
>>>> Thank you for the confirmations so far. Could you confirm the performance
>>>> with dom0 running 4.4.77 and domU running 4.1.42, and the other way
>>>> around? I would like to verify whether this is isolated to blkfront.
>>>>
>>> Hi,
>>>
>>> I've run the tests, and I can tell that the domU kernel version has no
>>> influence on the performance.
>>>
>>> Dom0 with 4.4.77 always shows bad performance, whether the domU runs
>>> 4.1.42 or 4.4.77.
>>>
>>> Dom0 with 4.1.42 always shows good performance, whether the domU runs
>>> 4.1.42 or 4.4.77.
>>
>> Hello,
>>
>> Sadly, I haven't yet had time to look into this. Can you please try to
>> use fio [0] to run the tests against the loop device in Dom0?
>>
>> If possible, could you test several combinations of block sizes, I/O
>> sizes and I/O depths?
>>
>> Thanks, Roger.
>>
>> [0] http://git.kernel.dk/?p=fio.git;a=summary
>>
> 
> OK, I'll give it a try later when I have more time. Probably next week.
> 
> Thanks,
> 

Hi,

I had some time to play around with fio. I am not really sure what I was
supposed to do with it, so I made a wild guess.

Using the setup showing bad write performance (dom0 with v4.4.77), I
ran a fio job with the following parameters:

[global]
size=5g
filename=<device>
direct=1
readwrite=write

[job1]
runtime=1m


I then used a loop to vary the block size from 512 to 8192 bytes in steps
of 512, and the I/O depth from 1 to 32 in steps of 4.
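
For reference, the sweep looked roughly like this (the device path is
illustrative, and the exact depth steps are approximate):

# sweep the block size from 512 to 8192 bytes in 512-byte steps
for bs in $(seq 512 512 8192); do
    # sweep the I/O depth from 1 to 32 in steps of 4
    for depth in $(seq 1 4 32); do
        fio --name=job1 --filename=/dev/loop1 --size=5g --direct=1 \
            --readwrite=write --runtime=1m --bs=${bs} --iodepth=${depth}
    done
done

Note that with fio's default synchronous ioengine, iodepth values above 1
are not actually honoured; an asynchronous engine such as libaio
(ioengine=libaio) would be needed for the depth to have an effect.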

On the loop device (from the dom0), the write speed was about 34MB/s most
of the time, reaching 110MB/s when the block size was a multiple of 4096
(i.e. 4096 and 8192).

In the domU, the write speed was the same, but reached only 52MB/s when
the block size was a multiple of 4096.

The I/O depth did not make any significant difference.

Feel free to suggest any improvement to my benchmark suite.

Thanks,

-- 
Benoit Depail
Senior Infrastructures Architect
NBS System
