
Re: [Xen-devel] [PATCH net-next] xen-netfront: avoid packet loss when ethernet header crosses page boundary



David Vrabel <david.vrabel@xxxxxxxxxx> writes:

> On 22/08/16 16:42, Vitaly Kuznetsov wrote:
>> 
>> I see two ways to fix the issue:
>> - Change the 'wire' protocol between netfront and netback to start keeping
>>   the original SKB structure. We'll have to add a flag indicating the fact
>>   that the particular request is a part of the original linear part and not
>>   a frag. We'll need to know the length of the linear part to pre-allocate
>>   memory.
>
> I don't think there needs to be a protocol change.  I think the check in
> netback is bogus -- it's the total packet length that must be >
> ETH_HLEN.  The upper layers will pull any headers from the frags as
> needed.

I'm afraid this is not always true; just removing the check leads us to
the following (a short sketch of why the BUG fires follows the trace):

[  495.442186] kernel BUG at ./include/linux/skbuff.h:1927! 
[  495.468789] invalid opcode: 0000 [#1] SMP 
[  495.490094] Modules linked in: tun loop bridge stp llc intel_rapl sb_edac 
edac_core x86_pkg_temp_thermal ipmi_ssif igb coretemp iTCO_wdt crct10dif_pclmul 
crc32_pclmul ptp ipmi_si iTCO_vendor_support ghash_clmulni_intel hpwdt 
ipmi_msghandler ioatdma hpilo pps_core lpc_ich acpi_power_meter wmi fjes 
tpm_tis dca shpchp tpm_tis_core tpm nfsd auth_rpcgss nfs_acl lockd xenfs grace 
xen_privcmd sunrpc xfs libcrc32c mgag200 i2c_algo_bit drm_kms_helper ttm drm 
crc32c_intel serio_raw xen_scsiback target_core_mod xen_pciback xen_netback 
xen_blkback xen_gntalloc xen_gntdev xen_evtchn 
[  495.749431] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.8.0-rc3+ #2 
[  495.782648] Hardware name: HP ProLiant DL380e Gen8, BIOS P73 08/20/2012 
[  495.817578] task: ffffffff81c0d500 task.stack: ffffffff81c00000 
[  495.847805] RIP: e030:[<ffffffff816f68a0>]  [<ffffffff816f68a0>] 
eth_type_trans+0xf0/0x130 
[  495.888942] RSP: e02b:ffff880429203d70  EFLAGS: 00010297 
[  495.916005] RAX: 0000000000000014 RBX: ffff88041f7bf200 RCX: 
0000000000000000 
[  495.952133] RDX: ffff88041ed76c40 RSI: ffff88041ad6b000 RDI: 
ffff88041f7bf200 
[  495.988919] RBP: ffff880429203d80 R08: 0000000000000000 R09: 
ffff88041ed76cf0 
[  496.025782] R10: 0000160000000000 R11: ffffc900041aa2f8 R12: 
000000000000000a 
[  496.061351] R13: ffffc900041b0200 R14: 000000000000000b R15: 
ffffc900041aa2a0 
[  496.098178] FS:  00007fa2b9442880(0000) GS:ffff880429200000(0000) 
knlGS:0000000000000000 
[  496.139767] CS:  e033 DS: 0000 ES: 0000 CR0: 0000000080050033 
[  496.169105] CR2: 00005558e4d43ea0 CR3: 000000042024e000 CR4: 
0000000000042660 
[  496.206816] Stack: 
[  496.216904]  000000000000000b 51859c5d87cdd22f ffff880429203e68 
ffffffffc002dd59 
[  496.254093]  ffffffff8155eed0 51859c5d87cdd22f ffff88041a450000 
0000000a22d66f70 
[  496.292351]  ffff88041a450000 ffffc900041ad9e0 ffffc900041aa3c0 
ffff88041f7bf200 
[  496.330823] Call Trace: 
[  496.343397]  <IRQ>  
[  496.352992]  [<ffffffffc002dd59>] xenvif_tx_action+0x569/0x8b0 [xen_netback] 
[  496.389933]  [<ffffffff8155eed0>] ? scsi_put_command+0x80/0xd0 
[  496.418810]  [<ffffffff816ccc07>] ? __napi_schedule+0x47/0x50 
[  496.449097]  [<ffffffffc00311f0>] ? xenvif_tx_interrupt+0x50/0x60 
[xen_netback] 
[  496.485804]  [<ffffffff81101bed>] ? __handle_irq_event_percpu+0x8d/0x190 
...
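
For the record, the BUG we hit is the BUG_ON() in __skb_pull():
eth_type_trans() unconditionally pulls ETH_HLEN bytes from the linear
area, so when netback builds an skb whose linear part is shorter than
the Ethernet header (while the total length is still >= ETH_HLEN, so a
total-length check alone passes), skb->len drops below skb->data_len.
Roughly (simplified from include/linux/skbuff.h and net/ethernet/eth.c,
not the verbatim 4.8 source):

    static inline unsigned char *__skb_pull(struct sk_buff *skb,
                                            unsigned int len)
    {
            skb->len -= len;
            BUG_ON(skb->len < skb->data_len); /* presumably skbuff.h:1927 above */
            return skb->data += len;
    }

    __be16 eth_type_trans(struct sk_buff *skb, struct net_device *dev)
    {
            skb->dev = dev;
            skb_reset_mac_header(skb);
            /* pulls the Ethernet header from the *linear* area only */
            skb_pull_inline(skb, ETH_HLEN);
            ...
    }

So if netfront's first slot carries, say, only 10 bytes of the linear
part because the Ethernet header crosses a page boundary, pulling 14
bytes leaves skb->len < skb->data_len and the assertion above fires.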

-- 
  Vitaly

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

