Re: [Xen-devel] Xen-unstable Linux 3.14-rc3 and 3.13 Network troubles "bisected"
Hrmm, i don't know if it's your mailer or my mailer .. but i seem to get a lot of
your mails truncated somehow :S Though the xen-devel list archive seems to have
them in complete form .. so it's probably my mailer tripping over something.

> I'll come up with some patches shortly.

OK, will test them ASAP.

Thursday, March 27, 2014, 10:54:09 AM, you wrote:

>> -----Original Message-----
>> From: Sander Eikelenboom [mailto:linux@xxxxxxxxxxxxxx]
>> Sent: 26 March 2014 20:18
>> To: Paul Durrant
>> Cc: Wei Liu; annie li; Zoltan Kiss; xen-devel@xxxxxxxxxxxxx; Ian Campbell;
>>     linux-kernel; netdev@xxxxxxxxxxxxxxx
>> Subject: Re: [Xen-devel] Xen-unstable Linux 3.14-rc3 and 3.13 Network
>>     troubles "bisected"
>>
>> Wednesday, March 26, 2014, 7:15:30 PM, you wrote:
>>
>> >> -----Original Message-----
>> >> From: Sander Eikelenboom [mailto:linux@xxxxxxxxxxxxxx]
>> >> Sent: 26 March 2014 18:08
>> >> To: Paul Durrant
>> >> Cc: Wei Liu; annie li; Zoltan Kiss; xen-devel@xxxxxxxxxxxxx; Ian Campbell;
>> >>     linux-kernel; netdev@xxxxxxxxxxxxxxx
>> >> Subject: Re: [Xen-devel] Xen-unstable Linux 3.14-rc3 and 3.13 Network
>> >>     troubles "bisected"
>> >>
>> >> Wednesday, March 26, 2014, 6:46:06 PM, you wrote:
>> >>
>> >> > Re-send shortened version...
>> >>
>> >> >> -----Original Message-----
>> >> >> From: Sander Eikelenboom [mailto:linux@xxxxxxxxxxxxxx]
>> >> >> Sent: 26 March 2014 16:54
>> >> >> To: Paul Durrant
>> >> >> Cc: Wei Liu; annie li; Zoltan Kiss; xen-devel@xxxxxxxxxxxxx; Ian Campbell;
>> >> >>     linux-kernel; netdev@xxxxxxxxxxxxxxx
>> >> >> Subject: Re: [Xen-devel] Xen-unstable Linux 3.14-rc3 and 3.13 Network
>> >> >>     troubles "bisected"
>> >>
>> >> > [snip]
>> >>
>> >> >> >> - When processing an SKB we end up in "xenvif_gop_frag_copy" while
>> >> >> >>   prod == cons ... but we still have bytes and size left ..
>> >> >> >> - start_new_rx_buffer() has returned true ..
>> >> >> >> - so we end up in get_next_rx_buffer
>> >> >> >> - this does a RING_GET_REQUEST and ups cons ..
>> >> >> >> - and we end up with a bad grant reference.
>> >> >> >>
>> >> >> >> Sometimes we are saved by the bell .. since additional slots have
>> >> >> >> become free (you see cons become > prod in "get_next_rx_buffer", but
>> >> >> >> shortly after that prod is increased .. just in time to not cause an
>> >> >> >> overrun).
>> >>
>> >> >> > Ah, but hang on... There's a BUG_ON meta_slots_used > max_slots_needed,
>> >> >> > so if we are overflowing the worst-case calculation then why is that
>> >> >> > BUG_ON not firing?
>> >>
>> >> >> You mean:
>> >> >>     sco = (struct skb_cb_overlay *)skb->cb;
>> >> >>     sco->meta_slots_used = xenvif_gop_skb(skb, &npo);
>> >> >>     BUG_ON(sco->meta_slots_used > max_slots_needed);
>> >> >> in "get_next_rx_buffer" ?
>> >>
>> >> > That code excerpt is from net_rx_action(), isn't it?
>> >>
>> >> Yes
>> >>
>> >> >> I don't know .. at least now it doesn't crash dom0, and therefore not my
>> >> >> complete machine, and tcp recovers from the failed packet :-)
>> >>
>> >> > Well, if the code calculating max_slots_needed were underestimating, then
>> >> > the BUG_ON() should fire. If it is not firing in your case then this
>> >> > suggests your problem lies elsewhere, or that meta_slots_used is not
>> >> > equal to the number of ring slots consumed.
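To make that last distinction concrete, here is a small stand-alone model of the
accounting (this is not the netback source: struct ring, consume_slot() and the
numbers are invented for illustration). It only shows the mechanism described
above: if the worst-case estimate that gates the skb is smaller than what the
copy loop actually consumes, req_cons steps past req_prod and the next request
read from the ring is garbage, i.e. the "bad grant reference":

    /*
     * Stand-alone model, NOT the xen-netback source: struct ring,
     * consume_slot() and the numbers below are made up.  It only
     * demonstrates how req_cons can step past req_prod when the
     * worst-case estimate is smaller than what the copy loop needs.
     */
    #include <stdio.h>

    struct ring {
        unsigned int req_prod;   /* requests the frontend has posted  */
        unsigned int req_cons;   /* requests the backend has consumed */
    };

    /* Stand-in for the RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++) pattern. */
    static void consume_slot(struct ring *r, const char *where)
    {
        r->req_cons++;
        if (r->req_cons > r->req_prod)
            printf("overrun in %s: cons=%u > prod=%u -> bogus request/grant ref\n",
                   where, r->req_cons, r->req_prod);
    }

    int main(void)
    {
        struct ring rx = { .req_prod = 4, .req_cons = 0 };
        unsigned int max_slots_needed = 4;   /* the precomputed worst case */
        unsigned int i;

        /* net_rx_action()-style gate: only start the skb if the estimate fits. */
        if (rx.req_prod - rx.req_cons < max_slots_needed) {
            printf("queue stopped, ring never touched\n");
            return 0;
        }

        /* ... but the copy loop then actually needs 5 slots, not 4. */
        consume_slot(&rx, "data part");
        for (i = 0; i < 3; i++)              /* frag chopped into 3 buffers */
            consume_slot(&rx, "get_next_rx_buffer");
        consume_slot(&rx, "extra slot the estimate missed");

        return 0;
    }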
>> >> It seems to be the latter ..
>> >>
>> >> [ 1157.188908] vif vif-7-0 vif7.0: ?!? xenvif_gop_skb Me here 5
>> >>   npo->meta_prod:40 old_meta_prod:36 vif->rx.sring->req_prod:2105867
>> >>   vif->rx.req_cons:2105868 meta->gso_type:1 meta->gso_size:1448 nr_frags:1
>> >>   req->gref:657 req->id:7 estimated_slots_needed:4 j(data):1
>> >>   reserved_slots_left:-1 used in funcstart: 0 + 1 .. used_dataloop:1 ..
>> >>   used_fragloop:3
>> >> [ 1157.244975] vif vif-7-0 vif7.0: ?!? xenvif_rx_action me here 2 ..
>> >>   vif->rx.sring->req_prod:2105867 vif->rx.req_cons:2105868
>> >>   sco->meta_slots_used:4 max_upped_gso:1 skb_is_gso(skb):1
>> >>   max_slots_needed:4 j:6 is_gso:1 nr_frags:1 firstpart:1 secondpart:2
>> >>   reserved_slots_left:-1
>> >>
>> >> net_rx_action() calculated we would need 4 slots .. and sco->meta_slots_used
>> >> == 4 when we return, so it doesn't trigger your BUG_ON ..
>> >>
>> >> The 4 slots we calculated are:
>> >>   1 slot for the data part:
>> >>       DIV_ROUND_UP(offset_in_page(skb->data) + skb_headlen(skb), PAGE_SIZE)
>> >>   2 slots for the single frag in this SKB, from: DIV_ROUND_UP(size, PAGE_SIZE)
>> >>   1 slot since GSO
>> >>
>> >> In the debug code i annotated all cons++, and the code uses 1 slot to
>> >> process the data from the SKB as expected, but uses 3 slots in the frag
>> >> chopping loop. And when it reaches the state where cons > prod, it is
>> >> always in "get_next_rx_buffer".
>> >>
>> >> >> But probably because "npo->copy_prod++" seems to be used for the frags ..
>> >> >> and it isn't added to npo->meta_prod ?
>> >>
>> >> > meta_slots_used is calculated as the value of meta_prod at return (from
>> >> > xenvif_gop_skb()) minus the value on entry, and if you look back up the
>> >> > code then you can see that meta_prod is incremented every time
>> >> > RING_GET_REQUEST() is evaluated.
>> >> > So, we must be consuming a slot without evaluating RING_GET_REQUEST(),
>> >> > and I think that's exactly what's happening...
>> >> > Right at the bottom of xenvif_gop_frag_copy() req_cons is simply
>> >> > incremented in the case of a GSO. So the BUG_ON() is indeed off by one.
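That off-by-one can be modelled the same way (again only a sketch, not the real
code: struct state, get_counted_slot() and the counts are invented, loosely
following the debug log above). meta_slots_used only reflects slots taken via
the RING_GET_REQUEST()/meta_prod path, so the bare req_cons++ for the GSO
descriptor consumes a ring slot that BUG_ON(meta_slots_used > max_slots_needed)
never sees:

    /*
     * Sketch only, NOT the real code: struct state and get_counted_slot()
     * are invented, counts loosely follow the debug log above.  It shows
     * why the BUG_ON stays quiet while the ring is still over-consumed.
     */
    #include <assert.h>
    #include <stdio.h>

    struct state {
        unsigned int req_cons;   /* ring slots actually consumed       */
        unsigned int meta_prod;  /* meta entries produced (== counted) */
    };

    /* Models the RING_GET_REQUEST() + meta_prod++ pattern described above. */
    static void get_counted_slot(struct state *s)
    {
        s->req_cons++;
        s->meta_prod++;
    }

    int main(void)
    {
        struct state s = { 0, 0 };
        unsigned int max_slots_needed = 4;   /* the worst-case estimate */
        unsigned int old_meta_prod = s.meta_prod;
        unsigned int old_req_cons = s.req_cons;
        unsigned int meta_slots_used;

        get_counted_slot(&s);                /* the skb data part */
        get_counted_slot(&s);                /* frag buffer 1     */
        get_counted_slot(&s);                /* frag buffer 2     */
        get_counted_slot(&s);                /* frag buffer 3     */
        s.req_cons++;                        /* GSO "gap": consumed, never counted */

        meta_slots_used = s.meta_prod - old_meta_prod;
        assert(!(meta_slots_used > max_slots_needed));   /* stays quiet: 4 <= 4 */

        printf("meta_slots_used=%u, ring slots consumed=%u, estimate=%u\n",
               meta_slots_used, s.req_cons - old_req_cons, max_slots_needed);
        /* prints: meta_slots_used=4, ring slots consumed=5, estimate=4 */
        return 0;
    }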