
Re: [Xen-devel] [PATCH v5 00/10] xen-block: multi hardware-queues/rings support



On 11/26/2015 10:57 AM, Konrad Rzeszutek Wilk wrote:
> On Thu, Nov 26, 2015 at 10:28:10AM +0800, Bob Liu wrote:
>>
>> On 11/26/2015 06:12 AM, Konrad Rzeszutek Wilk wrote:
>>> On Wed, Nov 25, 2015 at 03:56:03PM -0500, Konrad Rzeszutek Wilk wrote:
>>>> On Wed, Nov 25, 2015 at 02:25:07PM -0500, Konrad Rzeszutek Wilk wrote:
>>>>>>   xen/blkback: separate ring information out of struct xen_blkif
>>>>>>   xen/blkback: pseudo support for multi hardware queues/rings
>>>>>>   xen/blkback: get the number of hardware queues/rings from blkfront
>>>>>>   xen/blkback: make pool of persistent grants and free pages per-queue
>>>>>
>>>>> OK, got to those as well. I have put them in 'devel/for-jens-4.5' and
>>>>> am going to test them overnight before pushing them out.
>>>>>
>>>>> I see two bugs in the code that we MUST deal with:
>>>>>
>>>>>  - print_stats() is going to show zero values.
>>>>>  - the sysfs code (VBD_SHOW) isn't converted over to fetch data
>>>>>    from all the rings.
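
(For reference, a minimal sketch of what the VBD_SHOW conversion could
look like, assuming the st_* counters moved into struct xen_blkif_ring
in this series; all names approximate:)

/*
 * Sketch only: VBD_SHOW currently prints a single blkif-wide counter,
 * but with per-ring stats it has to sum across all rings instead.
 * Assumes blkif->rings[] / blkif->nr_rings and a per-ring st_rd_req
 * as introduced by this series.
 */
static unsigned long long vbd_rd_req(struct xen_blkif *blkif)
{
	unsigned long long sum = 0;
	unsigned int i;

	for (i = 0; i < blkif->nr_rings; i++)
		sum += blkif->rings[i].st_rd_req;
	return sum;
}

The VBD_SHOW(rd_req, ...) user would then print vbd_rd_req(blkif)
instead of the old blkif->st_rd_req field, and likewise for the other
counters and for print_stats().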
>>>>
>>>> - kthread_run can't handle the two "name, i" arguments. I see:
>>>>
>>>> root      5101     2  0 20:47 ?        00:00:00 [blkback.3.xvda-]
>>>> root      5102     2  0 20:47 ?        00:00:00 [blkback.3.xvda-]
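
(kthread_run() does take a printf-style namefmt, so "name, i" is legal;
the catch is that the visible comm is clamped to TASK_COMM_LEN, 16
bytes, so a long per-device prefix truncates away the per-ring "-%d"
suffix and every ring's thread shows the same name, as above. Sketch,
surrounding names assumed:)

/*
 * Sketch: keep the whole thread name within TASK_COMM_LEN (16 bytes
 * including the NUL) so the ring index survives; otherwise ps shows
 * the identical truncated "blkback.3.xvda-" for every ring.  "name"
 * here is the existing "blkback.<domid>.<dev>" string (assumed).
 */
ring->xenblkd = kthread_run(xen_blkif_schedule, ring,
			    "%s-%d", name, i);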
>>>
>>> And doing save/restore:
>>>
>>> xl save <id> /tmp/A;
>>> xl restore /tmp/A;
>>>
>>> ends up losing the proper state and not getting the ring setup back.
>>> I see this in the backend:
>>>
>>> [ 2719.448600] vbd vbd-22-51712: -1 guest requested 0 queues, exceeding the maximum of 3.
>>>
>>> And XenStore agrees:
>>> tool = ""
>>>  xenstored = ""
>>> local = ""
>>>  domain = ""
>>>   0 = ""
>>>    domid = "0"
>>>    name = "Domain-0"
>>>    device-model = ""
>>>     0 = ""
>>>      state = "running"
>>>    error = ""
>>>     backend = ""
>>>      vbd = ""
>>>       2 = ""
>>>        51712 = ""
>>>         error = "-1 guest requested 0 queues, exceeding the maximum of 3."
>>>
>>> .. which also leads to a memory leak as xen_blkbk_remove never gets
>>> called.
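
(For reference, the failing check is the backend's connect_ring()
reading the frontend's requested queue count from XenStore; a sketch
of that path, with xenblk_max_queues and the exact error handling
assumed:)

/*
 * Sketch of the backend-side check producing the error above:
 * connect_ring() reads the frontend's "multi-queue-num-queues" key.
 * After a restore the frontend never rewrote the key, so the read
 * fails, the count stays 0, and the limit check fires with the
 * confusing "0 queues, exceeding the maximum" message.
 */
unsigned int requested_num_queues = 0;

/* The read can fail after restore because the key is missing. */
xenbus_scanf(XBT_NIL, dev->otherend, "multi-queue-num-queues",
	     "%u", &requested_num_queues);

if (requested_num_queues == 0 ||
    requested_num_queues > xenblk_max_queues) {
	xenbus_dev_fatal(dev, -EINVAL,
			 "guest requested %u queues, exceeding the maximum of %u.",
			 requested_num_queues, xenblk_max_queues);
	return -EINVAL;
}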
>>
>> I think that was already fixed by your patch:
>> [PATCH RFC 2/2] xen/blkback: Free resources if connect_ring failed.
> 
> Nope. I get that with or without the patch.
> 

The attached patches should fix this issue.
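
0001 fixes a compile error; 0002's title suggests re-allocating the
per-ring info in blkif_resume() so the queue count is renegotiated
after restore. Roughly, the shape of such a fix (a sketch built from
the existing blkfront helpers, not the actual patch):

/*
 * Sketch only, not the attached patch: on resume, drop the old
 * per-ring state and redo the XenStore handshake.  talk_to_blkback()
 * re-reads the backend's multi-queue-max-queues, re-allocates the
 * per-ring info to match, and rewrites multi-queue-num-queues, so
 * the backend no longer sees a stale/missing queue count.
 */
static int blkfront_resume(struct xenbus_device *dev)
{
	struct blkfront_info *info = dev_get_drvdata(&dev->dev);

	blkif_free(info, info->connected == BLKIF_STATE_CONNECTED);

	return talk_to_blkback(dev, info);
}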

-- 
Regards,
-Bob

Attachment: 0001-xen-blkfront-fix-compile-error.patch
Description: Text Data

Attachment: 0002-xen-blkfront-realloc-ring-info-in-blkif_resume.patch
Description: Text Data

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel