
Re: [Xen-devel] [admin] [Pkg-xen-devel] [BUG] task jbd2/xvda4-8:174 blocked for more than 120 seconds.




On 02/12/2019 06:10 AM, Samuel Thibault wrote:
> Hans van Kranenburg, on Mon, 11 Feb 2019 22:59:11 +0100, wrote:
>> On 2/11/19 2:37 AM, Dongli Zhang wrote:
>>>
>>> On 2/10/19 12:35 AM, Samuel Thibault wrote:
>>>>
>>>> Hans van Kranenburg, on Sat, 09 Feb 2019 17:01:55 +0100, wrote:
>>>>>> I have forwarded the original mail: all VM I/O gets stuck, and thus the
>>>>>> VM becomes unusable.
>>>>>
>>>>> These are in many cases the symptoms of running out of "grant frames".
>>>>
>>>> Oh!  That could be it indeed.  I'm wondering what could be monopolizing
>>>> them, though, and why +deb9u11 is affected while +deb9u10 is not.  I'm
>>>> afraid increasing the gnttab max size to 32 might just defer filling it
>>>> up.
>>>>
>>>>>   -# ./xen-diag  gnttab_query_size 5
>>>>>   domid=5: nr_frames=11, max_nr_frames=32
>>>>
>>>> The current value is 31 over max 32 indeed.
>>>
>>> Assuming this is grant v1, there are still 4096/8=512 grant references
>>> available (32-31=1 frame available). I do not think the I/O hang is caused
>>> by a lack of grant entries.
>>
>> I suspect the measurement of 31 was taken when the domU was not hanging yet.
> 
> Indeed, I didn't have the hanging VM at hand.  I have looked again; it's
> now at 33. We'll have to monitor to check that it doesn't just keep
> increasing.
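
For monitoring, a minimal sketch (just an illustration: it assumes the
xen-diag binary is reachable from dom0 and that the domid of the affected
guest is still 5; adjust both as needed) could be something like:

  while true; do
      date
      ./xen-diag gnttab_query_size 5
      sleep 60
  done

Logging that output somewhere persistent would show whether nr_frames keeps
growing or levels off.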

If the max used to be 32 and the current value is already 33, this indicates
that the grant entries might have been exhausted in the past, before
max_nr_frames was raised.
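
In case it does keep growing, the limit itself can also be raised. A sketch,
with 64 only as an arbitrary example value:

  # per-domain, in the xl guest config, if the toolstack is recent enough
  # to support this option:
  max_grant_frames = 64

  # otherwise hypervisor-wide, on the Xen command line:
  gnttab_max_frames=64

Either way it is worth finding what is consuming the grants, since a larger
limit might just defer filling it up, as mentioned above.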

Dongli Zhang

> 
> Samuel
> 
