
Re: [Xen-devel] [PATCH 6/8] netfront: multi-page ring support





On 2013-2-27 23:49, Wei Liu wrote:
On Wed, 2013-02-27 at 07:39 +0000, ANNIE LI wrote:
On 2013-2-26 20:35, Wei Liu wrote:
On Tue, 2013-02-26 at 06:52 +0000, ANNIE LI wrote:
On 2013-2-16 0:00, Wei Liu wrote:
Signed-off-by: Wei Liu <wei.liu2@xxxxxxxxxx>
---
 drivers/net/xen-netfront.c |  246 +++++++++++++++++++++++++++++++-------------
 1 file changed, 174 insertions(+), 72 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 8bd75a1..de73a71 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -67,9 +67,19 @@ struct netfront_cb {

    #define GRANT_INVALID_REF   0

-#define NET_TX_RING_SIZE __CONST_RING_SIZE(xen_netif_tx, PAGE_SIZE)
-#define NET_RX_RING_SIZE __CONST_RING_SIZE(xen_netif_rx, PAGE_SIZE)
-#define TX_MAX_TARGET min_t(int, NET_TX_RING_SIZE, 256)
+#define XENNET_MAX_RING_PAGE_ORDER XENBUS_MAX_RING_PAGE_ORDER
+#define XENNET_MAX_RING_PAGES      (1U << XENNET_MAX_RING_PAGE_ORDER)
+
+
+#define NET_TX_RING_SIZE(_nr_pages)                  \
+     __CONST_RING_SIZE(xen_netif_tx, PAGE_SIZE * (_nr_pages))
+#define NET_RX_RING_SIZE(_nr_pages)                  \
+     __CONST_RING_SIZE(xen_netif_rx, PAGE_SIZE * (_nr_pages))
+
+#define XENNET_MAX_TX_RING_SIZE NET_TX_RING_SIZE(XENNET_MAX_RING_PAGES)
+#define XENNET_MAX_RX_RING_SIZE NET_RX_RING_SIZE(XENNET_MAX_RING_PAGES)
+
+#define TX_MAX_TARGET min_t(int, NET_TX_RING_SIZE(1), 256)
Not using multi-page ring here?
In xennet_create_dev, gnttab_alloc_grant_references allocates
TX_MAX_TARGET grant references for tx. In xennet_release_tx_bufs,
NET_TX_RING_SIZE(np->tx_ring_pages) entries are processed, and
NET_TX_RING_SIZE(np->tx_ring_pages) is quite different from
TX_MAX_TARGET when np->tx_ring_pages is not 1. Although
skb_entry_is_link prevents releasing invalid grants, the many empty
loop iterations seem unnecessary. I think TX_MAX_TARGET should be
changed into some variable connected with np->tx_ring_pages. Or did
you intend to use a single-page ring here?

Looking back at my history, this limitation was introduced because,
with a multi-page backend and a single-page frontend, the backend skb
processing could overlap.
I did not see the overlap you mention here in netback. Although
netback supports multi-page rings, netback->vif still uses a single
page if the frontend only supports a single page. Netfront and netback
negotiate this through xenstore in your 5/8 patch, so the requests and
responses should not overlap between netback and netfront. Am I
missing something?
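For reference, the negotiation I have in mind looks roughly like this
on the frontend side (only a sketch; the xenstore key names and helper
fields here are my guesses, not necessarily what your 5/8 patch uses):

    /* Sketch, frontend side.  "max-ring-page-order" / "tx-ring-order"
     * are assumed key names, not verified against patch 5/8. */
    unsigned int backend_max = 0, order;
    int err;

    err = xenbus_scanf(XBT_NIL, dev->otherend,
                       "max-ring-page-order", "%u", &backend_max);
    if (err != 1)
            backend_max = 0;  /* old backend: single-page ring only */

    order = min_t(unsigned int, XENNET_MAX_RING_PAGE_ORDER, backend_max);
    np->tx_ring_pages = 1U << order;

    err = xenbus_printf(XBT_NIL, dev->nodename, "tx-ring-order",
                        "%u", order);
    if (err)
            np->tx_ring_pages = 1;  /* fall back to a single-page ring */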

I tried to dig up the mail archive just now and realized that the bug
report was in a private mail exchange with Konrad.

I don't really remember the details now since it is more than a year
old, but you can find a trace in Konrad's tree, CS 5b4c3dd5b255. All I
can remember is that this bug was triggered by a mix of old and new
frontend/backend.

I checked the code in Konrad's tree, and I think the overlap issue
you mentioned exists between the original netback (without multi-page
ring support) and the newer netfront. The original netback does not
support multi-page rings, and before this bug fix your newer netfront
used "#define TX_MAX_TARGET XENNET_MAX_TX_RING_SIZE" directly, which
would cause the overlap when netfront allocates rx skbs.
"#define TX_MAX_TARGET min_t(int, NET_TX_RING_SIZE(1), 256)" limits
netfront to a single-page ring; it fixes the overlap, but it is not
enough.


I think this cap can be removed if we make all buffers in netfront
dynamically allocated.

Yes, making TX_MAX_TARGET dynamic would fix this issue.
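Something along these lines, perhaps (only a sketch; apart from
tx_ring_pages the field names are made up, and tx_skbs/grant_tx_ref
would have to become pointers sized at connect time):

    /* Sketch only: size the tx bookkeeping from the negotiated ring
     * instead of the compile-time TX_MAX_TARGET cap. */
    unsigned int tx_entries = NET_TX_RING_SIZE(np->tx_ring_pages);

    np->tx_skbs = kcalloc(tx_entries, sizeof(*np->tx_skbs), GFP_KERNEL);
    np->grant_tx_ref = kcalloc(tx_entries, sizeof(*np->grant_tx_ref),
                               GFP_KERNEL);
    if (!np->tx_skbs || !np->grant_tx_ref)
            goto free_bufs;

    if (gnttab_alloc_grant_references(tx_entries, &np->gref_tx_head) < 0)
            goto free_bufs;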

Thanks
Annie


Wei.

--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

