
Re: [Xen-devel] [PATCH RFC 1/4] xen-netback: Factor queue-specific data into queue struct.



Subject:   Re: [PATCH RFC 1/4] xen-netback: Factor queue-specific data into queue struct.
To:        Paul Durrant <Paul.Durrant@xxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
Cc:        Ian Campbell <Ian.Campbell@xxxxxxxxxx>, Wei Liu <wei.liu2@xxxxxxxxxx>

On 16/01/14 11:03, Paul Durrant wrote:
-----Original Message-----
From: Andrew Bennieston [mailto:andrew.bennieston@xxxxxxxxxx]
Sent: 16 January 2014 10:39
To: Paul Durrant; xen-devel@xxxxxxxxxxxxxxxxxxxx
Cc: Ian Campbell; Wei Liu
Subject: Re: [PATCH RFC 1/4] xen-netback: Factor queue-specific data into queue struct.

On 16/01/14 10:23, Paul Durrant wrote:
-----Original Message-----
From: Andrew J. Bennieston [mailto:andrew.bennieston@xxxxxxxxxx]
Sent: 15 January 2014 16:23
To: xen-devel@xxxxxxxxxxxxxxxxxxxx
Cc: Ian Campbell; Wei Liu; Paul Durrant; Andrew Bennieston
Subject: [PATCH RFC 1/4] xen-netback: Factor queue-specific data into queue struct.

From: "Andrew J. Bennieston" <andrew.bennieston@xxxxxxxxxx>

In preparation for multi-queue support in xen-netback, move the
queue-specific data from struct xenvif into struct xenvif_queue,
and update the rest of the code to use this.

Also adds loops over queues where appropriate, even though only one
is configured at this point, and uses alloc_netdev_mq() and the
corresponding multi-queue netif wake/start/stop functions in
preparation for multiple active queues.

Finally, implements a trivial queue selection function suitable for
ndo_select_queue, which simply returns 0 for a single queue and
uses skb_get_rxhash() to compute the queue index otherwise.

Signed-off-by: Andrew J. Bennieston <andrew.bennieston@xxxxxxxxxx>
---
 drivers/net/xen-netback/common.h    |   66 +++--
 drivers/net/xen-netback/interface.c |  308 +++++++++++++--------
 drivers/net/xen-netback/netback.c   |  516 +++++++++++++++++------------------
 drivers/net/xen-netback/xenbus.c    |   89 ++++--
 4 files changed, 556 insertions(+), 423 deletions(-)
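
For reference, a minimal sketch of the trivial select_queue behaviour
described in the commit message above (assuming the 3.13-era two-argument
ndo_select_queue prototype; the function name and details here are
illustrative only, not the patch's actual code) could look like:

static u16 xenvif_select_queue(struct net_device *dev, struct sk_buff *skb)
{
        unsigned int num_queues = dev->real_num_tx_queues;

        /* Single-queue case: everything goes to queue 0. */
        if (num_queues == 1)
                return 0;

        /* Otherwise spread flows across queues using the receive hash. */
        return skb_get_rxhash(skb) % num_queues;
}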

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index c47794b..54d2eeb 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -108,17 +108,19 @@ struct xenvif_rx_meta {
  */
 #define MAX_GRANT_COPY_OPS (MAX_SKB_FRAGS * XEN_NETIF_RX_RING_SIZE)
 
-struct xenvif {
-       /* Unique identifier for this interface. */
-       domid_t          domid;
-       unsigned int     handle;
+struct xenvif;
+
+struct xenvif_queue { /* Per-queue data for xenvif */
+       unsigned int number; /* Queue number, 0-based */
+       char name[IFNAMSIZ+4]; /* DEVNAME-qN */

I wonder whether it would be neater to #define the name size here...


Absolutely. I'll do this in V2.

+       struct xenvif *vif; /* Parent VIF */

        /* Use NAPI for guest TX */
        struct napi_struct napi;
        /* When feature-split-event-channels = 0, tx_irq = rx_irq. */
        unsigned int tx_irq;
        /* Only used when feature-split-event-channels = 1 */
-       char tx_irq_name[IFNAMSIZ+4]; /* DEVNAME-tx */
+       char tx_irq_name[IFNAMSIZ+7]; /* DEVNAME-qN-tx */

...and the IRQ name size here. It's kind of ugly to have + some_magic_value
in array definitions.


As above.
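
A minimal sketch of the #define approach suggested above, with hypothetical
macro names (the actual V2 patch may well pick different identifiers), could
be:

/* Hypothetical names; sizes taken from the comments in the patch. */
#define XENVIF_QUEUE_NAME_SIZE (IFNAMSIZ + 4) /* DEVNAME-qN */
#define XENVIF_IRQ_NAME_SIZE   (IFNAMSIZ + 7) /* DEVNAME-qN-tx / DEVNAME-qN-rx */

struct xenvif_queue {
        unsigned int number;                    /* Queue number, 0-based */
        char name[XENVIF_QUEUE_NAME_SIZE];      /* DEVNAME-qN */
        char tx_irq_name[XENVIF_IRQ_NAME_SIZE]; /* DEVNAME-qN-tx */
        char rx_irq_name[XENVIF_IRQ_NAME_SIZE]; /* DEVNAME-qN-rx */
        /* ... remaining per-queue fields as in the patch ... */
};

That keeps the "-qN" / "-qN-tx" suffix lengths in one place rather than
repeating magic offsets at each array definition.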

        struct xen_netif_tx_back_ring tx;
        struct sk_buff_head tx_queue;
        struct page *mmap_pages[MAX_PENDING_REQS];
@@ -140,7 +142,7 @@ struct xenvif {
        /* When feature-split-event-channels = 0, tx_irq = rx_irq. */
        unsigned int rx_irq;
        /* Only used when feature-split-event-channels = 1 */
-       char rx_irq_name[IFNAMSIZ+4]; /* DEVNAME-rx */
+       char rx_irq_name[IFNAMSIZ+7]; /* DEVNAME-qN-rx */
        struct xen_netif_rx_back_ring rx;
        struct sk_buff_head rx_queue;

@@ -150,14 +152,27 @@ struct xenvif {
         */
        RING_IDX rx_req_cons_peek;
 
-       /* This array is allocated seperately as it is large */
-       struct gnttab_copy *grant_copy_op;
+       struct gnttab_copy grant_copy_op[MAX_GRANT_COPY_OPS];

I see you brought this back in line, which is reasonable as the queue is
now a separately allocated struct.

Indeed; trying to keep the number of separate allocs/frees to a
minimum, for everybody's sanity!


        /* We create one meta structure per ring request we consume, so
         * the maximum number is the same as the ring size.
         */
        struct xenvif_rx_meta meta[XEN_NETIF_RX_RING_SIZE];
 
+       /* Transmit shaping: allow 'credit_bytes' every 'credit_usec'. */
+       unsigned long   credit_bytes;
+       unsigned long   credit_usec;
+       unsigned long   remaining_credit;
+       struct timer_list credit_timeout;
+       u64 credit_window_start;
+
+};
+
+struct xenvif {
+       /* Unique identifier for this interface. */
+       domid_t          domid;
+       unsigned int     handle;
+
        u8               fe_dev_addr[6];

        /* Frontend feature information. */
@@ -171,12 +186,9 @@ struct xenvif {
        /* Internal feature information. */
        u8 can_queue:1;     /* can queue packets for receiver? */
 
-       /* Transmit shaping: allow 'credit_bytes' every 'credit_usec'. */
-       unsigned long   credit_bytes;
-       unsigned long   credit_usec;
-       unsigned long   remaining_credit;
-       struct timer_list credit_timeout;
-       u64 credit_window_start;
+       /* Queues */
+       unsigned int num_queues;
+       struct xenvif_queue *queues;
 
        /* Statistics */

Do stats need to be per-queue (and then possibly aggregated at query
time)?


Aside from the potential to see the stats for each queue, which may
be useful in some limited circumstances for performance testing or
debugging, I don't see what this buys us...


Well, if you have multiple queues running simultaneously, do you make
sure global stats are atomically adjusted? I didn't see any code to
that effect, and since atomic ops are expensive it's usually better to
keep per queue stats and aggregate at the point of query.

Good point. I'll do per-queue stats in V2.
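
To make that concrete, a rough sketch of keeping counters in each queue and
summing them when the stats are queried (field and function names here are
only illustrative, not necessarily what V2 will contain) might be:

/* Hypothetical per-queue counter block, embedded in struct xenvif_queue
 * as a 'stats' member.
 */
struct xenvif_queue_stats {
        unsigned long rx_packets;
        unsigned long rx_bytes;
        unsigned long tx_packets;
        unsigned long tx_bytes;
};

static struct net_device_stats *xenvif_get_stats(struct net_device *dev)
{
        struct xenvif *vif = netdev_priv(dev);
        unsigned long rx_packets = 0, rx_bytes = 0;
        unsigned long tx_packets = 0, tx_bytes = 0;
        unsigned int i;

        /* Each queue updates only its own counters on the hot path, so
         * no atomics or locks are needed there; totals are computed on
         * demand when userspace asks for them.
         */
        for (i = 0; i < vif->num_queues; i++) {
                rx_packets += vif->queues[i].stats.rx_packets;
                rx_bytes   += vif->queues[i].stats.rx_bytes;
                tx_packets += vif->queues[i].stats.tx_packets;
                tx_bytes   += vif->queues[i].stats.tx_bytes;
        }

        dev->stats.rx_packets = rx_packets;
        dev->stats.rx_bytes   = rx_bytes;
        dev->stats.tx_packets = tx_packets;
        dev->stats.tx_bytes   = tx_bytes;

        return &dev->stats;
}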

Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

