Re: [Xen-devel] [RFC 21/23] net/xen-netback: Make it running on 64KB page granularity
On 20/05/15 09:26, Wei Liu wrote:
> On Tue, May 19, 2015 at 11:56:39PM +0100, Julien Grall wrote:
>>
>>>> diff --git a/drivers/net/xen-netback/common.h
>>>> b/drivers/net/xen-netback/common.h
>>>> index 0eda6e9..c2a5402 100644
>>>> --- a/drivers/net/xen-netback/common.h
>>>> +++ b/drivers/net/xen-netback/common.h
>>>> @@ -204,7 +204,7 @@ struct xenvif_queue { /* Per-queue data for xenvif */
>>>>  /* Maximum number of Rx slots a to-guest packet may use, including the
>>>>   * slot needed for GSO meta-data.
>>>>   */
>>>> -#define XEN_NETBK_RX_SLOTS_MAX (MAX_SKB_FRAGS + 1)
>>>> +#define XEN_NETBK_RX_SLOTS_MAX ((MAX_SKB_FRAGS + 1) * XEN_PFN_PER_PAGE)
>>>>
>>>>  enum state_bit_shift {
>>>>  /* This bit marks that the vif is connected */
>>>>
>>>> The function xenvif_wait_for_rx_work never returns. I guess it's
>>>> because there are not enough slots available.
>>>>
>>>> For 64KB page granularity we ask for 16 times more slots than for 4KB
>>>> page granularity, although it's very unlikely that all the slots will
>>>> be used.
>>>>
>>>> FWIW I pointed out the same problem on blkfront.
>>>>
>>>
>>> This is not going to work. The ring in netfront / netback has only 256
>>> slots. Now you ask netback to reserve more than 256 slots -- (17 +
>>> 1) * (64 / 4) = 288, which can never be fulfilled. See the call to
>>> xenvif_rx_ring_slots_available.
>>>
>>> I think XEN_NETBK_RX_SLOTS_MAX derives from the fact that each packet
>>> to the guest cannot be larger than 64K. So you might be able to use
>>>
>>> #define XEN_NETBK_RX_SLOTS_MAX ((65536 / XEN_PAGE_SIZE) + 1)
>>
>> I didn't know that a packet cannot be larger than 64KB. That simplifies
>> the problem a lot.
>>
>
> Thinking about this more, you will need one more slot for the GSO
> information, so make it ((65536 / XEN_PAGE_SIZE) + 1 + 1).

I have introduced XEN_MAX_SKB_FRAGS, defined as (65536 / XEN_PAGE_SIZE + 1),
because it's also required in another place.
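For anyone following along, the slot arithmetic above can be sketched as a
quick calculation. This is only an illustration of the numbers quoted in the
thread (256-slot ring, MAX_SKB_FRAGS = 17, 4KB Xen grants, 64KB guest pages);
the constant names mirror the kernel macros but the script itself is not
kernel code:

```python
# Sketch of the slot-count reasoning from the thread above.
# Assumed values: 256-slot netback ring, MAX_SKB_FRAGS = 17,
# 4KB Xen grant size, 64KB guest page granularity.
RING_SIZE = 256
MAX_SKB_FRAGS = 17
XEN_PAGE_SIZE = 4096                        # Xen grants stay 4KB
XEN_PFN_PER_PAGE = 65536 // XEN_PAGE_SIZE   # 16 grants per 64KB guest page

# Original RFC: scale the 4KB formula by grants-per-page.
naive = (MAX_SKB_FRAGS + 1) * XEN_PFN_PER_PAGE   # 288 slots

# Wei's suggestion: bound by the 64KB max packet size,
# plus one slot for GSO metadata and one extra.
bounded = (65536 // XEN_PAGE_SIZE) + 1 + 1       # 18 slots

print(naive, naive > RING_SIZE)       # 288 True  -- can never be satisfied
print(bounded, bounded <= RING_SIZE)  # 18 True   -- fits in the ring
```

The point is that 288 exceeds the 256-slot ring, so
xenvif_rx_ring_slots_available() could never succeed, while the
packet-size-bounded value of 18 easily fits.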
Regards,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel