
Re: [Xen-devel] [PATCH] xen-netfront: Improve error handling during initialization



On 02/02/2017 09:54 AM, Ross Lagerwall wrote:
> On 02/01/2017 06:54 PM, Boris Ostrovsky wrote:
>> On 02/01/2017 10:50 AM, Ross Lagerwall wrote:
>>> Improve error handling during initialization. This fixes a crash when
>>> running out of grant refs when creating many queues across many
>>> netdevs.
>>>
>>> * Delay timer creation so that if initializing a queue fails, the timer
>>> has not been set up yet.
>>> * If creating queues fails (i.e. there are no grant refs available),
>>> call xenbus_dev_fatal() to ensure that the xenbus device is set to the
>>> closed state.
>>> * If no queues are created, don't call xennet_disconnect_backend as
>>> netdev->real_num_tx_queues will not have been set correctly.
>>> * If setup_netfront() fails, ensure that all the queues created are
>>> cleaned up, not just those that have been set up.
>>> * If any queues were set up and an error occurs, call
>>> xennet_destroy_queues() to stop the timer and clean up the napi
>>> context.
>>
>> We need to stop the timer in xennet_disconnect_backend(). I sent a patch
>> a couple of days ago
>>
>> https://lists.xenproject.org/archives/html/xen-devel/2017-01/msg03269.html
>>
>>
>> but was about to resend it with del_timer_sync() moved after
>> napi_synchronize().
>>
>
> OK, but the patch is still relevant since I believe we still need to
> clean up the napi context in this case (plus the patch fixes a lot of
> other issues).

I was only commenting on that specific bullet in the commit message, I
am not arguing against the patch.

>
> But I will respin it on top of your patch(es) and re-test it before
> resending.
>

You can re-test with the patch in the link above; I will not be
re-sending a new version.

Thanks.
-boris




_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

