
Re: [Xen-devel] [PATCH net-next RESEND] xen-netfront: avoid packet loss when ethernet header crosses page boundary



On 19/09/16 11:22, Vitaly Kuznetsov wrote:
> David Miller <davem@xxxxxxxxxxxxx> writes:
> 
>> From: Vitaly Kuznetsov <vkuznets@xxxxxxxxxx>
>> Date: Fri, 16 Sep 2016 12:59:14 +0200
>>
>>> @@ -595,6 +596,19 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
>>>     offset = offset_in_page(skb->data);
>>>     len = skb_headlen(skb);
>>>  
>>> +   /* The first req should be at least ETH_HLEN size or the packet will be
>>> +    * dropped by netback.
>>> +    */
>>> +   if (unlikely(PAGE_SIZE - offset < ETH_HLEN)) {
>>> +           nskb = skb_copy(skb, GFP_ATOMIC);
>>> +           if (!nskb)
>>> +                   goto drop;
>>> +           dev_kfree_skb_any(skb);
>>> +           skb = nskb;
>>> +           page = virt_to_page(skb->data);
>>> +           offset = offset_in_page(skb->data);
>>> +   }
>>> +
>>>     spin_lock_irqsave(&queue->tx_lock, flags);
>>
>> I think you also have to recalculate 'len' in this case, as
>> skb_headlen() will definitely be different for nskb.
>>
>> In fact, I can't see how this code can work properly without that fix.
> 
> Thank you for your feedback, David.
> 
> In my testing (even when I tried doing skb_copy() for all skbs
> unconditionally) skb_headlen(nskb) always equals 'len', so I was under
> the impression that both 'skb->len' and 'skb->data_len' remain the same
> when we do skb_copy(). However, if you think there are cases where
> headlen changes, I see no problem with re-calculating 'len', as it won't
> bring any significant performance penalty compared to the already added
> skb_copy().

I think you can move the len = skb_headlen(skb) after the if; then there
is no need to recalculate it.
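
A minimal sketch of that reordering, keeping the rest of the hunk as
posted (untested):

    offset = offset_in_page(skb->data);

    /* The first req should be at least ETH_HLEN size or the packet will be
     * dropped by netback.
     */
    if (unlikely(PAGE_SIZE - offset < ETH_HLEN)) {
            nskb = skb_copy(skb, GFP_ATOMIC);
            if (!nskb)
                    goto drop;
            dev_kfree_skb_any(skb);
            skb = nskb;
            page = virt_to_page(skb->data);
            offset = offset_in_page(skb->data);
    }

    /* Taken after the (possibly replaced) skb, so no recalculation needed. */
    len = skb_headlen(skb);

    spin_lock_irqsave(&queue->tx_lock, flags);

That way there is a single assignment and 'len' always matches the skb
that is actually put on the ring.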

David

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

