
Re: [Xen-devel] [PATCH net] xen-netback: bookkeep number of queues in our own module



On Wed, Jun 18, 2014 at 10:18:50AM -0400, Boris Ostrovsky wrote:
> On 06/18/2014 10:09 AM, Wei Liu wrote:
> >The original code uses netdev->real_num_tx_queues to bookkeep the
> >number of queues and invokes netif_set_real_num_tx_queues to set that
> >number. However, netif_set_real_num_tx_queues doesn't allow
> >real_num_tx_queues to be smaller than 1, which means setting the number
> >to 0 will not work and real_num_tx_queues is left untouched.
> >
> >This is bogus when xenvif_free is invoked before any queues are
> >allocated. That function needs to iterate through all queues to free
> >resources. Using the wrong number of queues results in a NULL pointer
> >dereference.
> >
> >So we bookkeep the number of queues in xen-netback to solve this
> >problem. The core driver uses real_num_tx_queues to cap the queue
> >index to a valid value. In start_xmit we already guard against an
> >out-of-range queue index, so we should be fine.
> >
> >This fixes a regression introduced by the multiqueue patchset in 3.16-rc1.
> 
> 
> David sent a couple of patches earlier today that I have been testing and
> they appear to fix both netfront and netback. (I am waiting for 32-bit to
> finish)
> 
> http://lists.xenproject.org/archives/html/xen-devel/2014-06/msg02308.html
> 

I saw that, but they don't fix this backend bug. Try crashing the guest
before it connects to the backend. As I said in the commit message:

> >This is bogus when xenvif_free is invoked before any queues are
> >allocated. That function needs to iterate through all queues to free
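
To make the crash path concrete, this is roughly what the 3.16-rc1 code
does (a sketch reconstructed from the commit message rather than a
verbatim copy of interface.c; helper names such as xenvif_deinit_queue
may differ slightly):

void xenvif_free(struct xenvif *vif)
{
        /* real_num_tx_queues was set when the netdev was allocated and
         * netif_set_real_num_tx_queues() refuses to lower it to 0, so
         * it is still non-zero even if no queues were ever set up. */
        unsigned int num_queues = vif->dev->real_num_tx_queues;
        unsigned int queue_index;

        /* If the frontend never connected, vif->queues is still NULL,
         * so the very first iteration dereferences a NULL pointer. */
        for (queue_index = 0; queue_index < num_queues; ++queue_index)
                xenvif_deinit_queue(&vif->queues[queue_index]);

        free_netdev(vif->dev);
}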

The call to netif_set_real_num_tx_queues will need to be removed anyway.
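
What I have in mind is simply to carry the count in our own struct
instead (a minimal sketch, not the final patch; the field name
num_queues and where exactly it gets set are illustrative):

struct xenvif {
        /* ... existing fields ... */
        struct xenvif_queue *queues;    /* NULL until queues are allocated */
        unsigned int num_queues;        /* 0 until queues are allocated */
};

void xenvif_free(struct xenvif *vif)
{
        unsigned int queue_index;

        /* num_queues is 0 if no queues were ever allocated, so the
         * loop simply does nothing instead of chasing a NULL pointer. */
        for (queue_index = 0; queue_index < vif->num_queues; ++queue_index)
                xenvif_deinit_queue(&vif->queues[queue_index]);

        vfree(vif->queues);
        vif->queues = NULL;
        vif->num_queues = 0;

        free_netdev(vif->dev);
}

num_queues would be set on the xenbus connect path, once the queues are
actually allocated, and reset to 0 on disconnect.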

Wei.
