[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [Xen-devel] [PATCH 2/3] xen-netback: switch to per-cpu scratch space



On 28/05/13 14:18, Konrad Rzeszutek Wilk wrote:
> On Mon, May 27, 2013 at 12:29:42PM +0100, Wei Liu wrote:
>> There are maximum nr_onlie_cpus netback threads running. We can make use
>> of per-cpu scratch space to reduce the size of buffer space when we move
>> to 1:1 model.
>>
>> In the unlikely event when per-cpu scratch space is not available,
>> processing routines will refuse to run on that CPU.
[...]
>> --- a/drivers/net/xen-netback/netback.c
>> +++ b/drivers/net/xen-netback/netback.c
[...]
>> +                    printk(KERN_ALERT
>> +                           "xen-netback: "
>> +                           "CPU %d scratch space is not available,"
>> +                           " not doing any TX work for netback/%d\n",
>> +                           smp_processor_id(),
>> +                           (int)(netbk - xen_netbk));
> 
> So ... are you going to retry it? Drop it? Can you include in the message the
> the mechanism by which you are going to recover?
> 
[...]
>> +                           "xen-netback: "
>> +                           "CPU %d scratch space is not available,"
>> +                           " not doing any RX work for netback/%d\n",
>> +                           smp_processor_id(),
>> +                           (int)(netbk - xen_netbk));
> 
> And can you explain what the recovery mechanism is?

There isn't any recovery mechanism at the moment. If the scratch space
was not allocated then any netback thread may end up being unable to do
any work indefinitely (if the scheduler repeatedly schedules them on the
VCPU with no scratch space).

This is an appalling failure mode.

I also don't think there is a sensible way to recover.  We do not want
hotplugging of a VCPU to break or degrade the behaviour of existing VIFs.

The meta data is 12 * 512 = 6144 and the grant table ops is 24 * 512 =
12288.  This works out to 6 pages total.  I think we can spare 6 pages
per VIF and just have per-thread scratch space.

You may also want to consider a smaller batch size instead of allowing
for 2x ring size.  How often do you need this many entries?

David

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 


Rackspace

Lists.xenproject.org is hosted with RackSpace, monitoring our
servers 24x7x365 and backed by RackSpace's Fanatical Support®.