
Re: [UNIKRAFT/LWIP PATCH v4 1/5] uknetdev: Per device `lwip_data` to callbacks



On 06.11.20 12:25, Sharan Santhanam wrote:
Hello Simon,

Thanks for the work. I have a comment on the use of the allocator, but the rest is fine.

Thanks & Regards

Sharan

On 10/30/20 7:31 PM, Simon Kuenzer wrote:
In order to hand over the per-device `lwip_data` to the rx/tx packet
allocation function (`netif_alloc_rxpkts()`), we move the reference to the
allocator that is used for packet allocations into `lwip_data`. In the
future, this can be used to enable per-device buffer pools (e.g., with
`ukallocpool`) for faster allocation and freeing requests.

Signed-off-by: Simon Kuenzer <simon.kuenzer@xxxxxxxxx>
---
  Config.uk  |  2 +-
  uknetdev.c | 32 ++++++++++++++++++++++----------
  2 files changed, 23 insertions(+), 11 deletions(-)
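
As a side note on the `ukallocpool` idea mentioned in the commit message: once per-device pools are introduced, wiring one up could look roughly like the following. This is a hypothetical sketch against the `ukallocpool` interface; the helper name, object count, and alignment are made up for illustration.

    #include <errno.h>
    #include <uk/alloc.h>
    #include <uk/allocpool.h>

    /* Hypothetical helper: back the per-device packet allocator with a
     * pre-allocated pool of packet-sized buffers (sizes illustrative).
     */
    static int lwip_data_setup_pool(struct lwip_netdev_data *lwip_data,
                                    struct uk_alloc *parent)
    {
        struct uk_allocpool *p;

        p = uk_allocpool_alloc(parent, 128, UKNETDEV_BUFLEN, 64);
        if (!p)
            return -ENOMEM;

        /* A pool exposes a regular `struct uk_alloc` interface, so
         * netif_alloc_rxpkts() and uknetdev_output() keep working
         * unchanged on top of it. */
        lwip_data->pkt_a = uk_allocpool2ukalloc(p);
        return 0;
    }
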

diff --git a/Config.uk b/Config.uk
index e2eacb5..32deb3a 100644
--- a/Config.uk
+++ b/Config.uk
@@ -25,7 +25,7 @@ config LWIP_UKNETDEV
  config LWIP_UKNETDEV_SCRATCH
      int
-    default 32
+    default 48
      help
          The network stack reserves space in the uknetdev device for its
          use. Please do not change this value and in case change this
diff --git a/uknetdev.c b/uknetdev.c
index e794c37..e645ac1 100644
--- a/uknetdev.c
+++ b/uknetdev.c
@@ -64,6 +64,7 @@
  struct lwip_netdev_data {
      uint32_t features;
+    struct uk_alloc *pkt_a;
Shouldn't we use the allocator of the specific ring/queue instead of a per-netdevice one? The netdev library gives us that flexibility, but since we currently have a single queue and a common allocator, it won't make a big difference. Maybe we should reconsider this when we add multi-queue support. Maybe we could add a TODO comment here.

In fact, we can reconsider this. A TODO comment does not hurt at this point; I will add something. Thanks.
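
To illustrate the direction (purely hypothetical; the struct and field names below are made up), the TODO could hint at a per-queue context that carries its own allocator once multi-queue support exists:

    /* TODO (sketch): with multi-queue support, move the packet
     * allocator from `struct lwip_netdev_data` into a per-queue
     * context so that each ring can be served from its own pool.
     */
    struct lwip_netdev_rxq_data {
        struct uk_alloc *pkt_a; /* per-queue packet allocator */
        /* ... further per-queue state ... */
    };
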

  #ifdef CONFIG_HAVE_SCHED
      struct uk_thread *poll_thread; /* Thread per device */
      char *_name; /* Thread name */
@@ -71,6 +72,15 @@ struct lwip_netdev_data {
  #endif /* CONFIG_HAVE_SCHED */
  };
+/*
+ * Compile-time assertion that ensures that the uknetdev scratch pad can
+ * fit `struct lwip_netdev_data`. In case this is not fulfilled, please
+ * adapt LWIP_UKNETDEV_SCRATCH in `Config.uk`. The purpose of using the
+ * scratch pad is performance: `struct lwip_netdev_data` is placed on the
+ * same allocation as `struct uk_netdev`, so cache locality can be
+ * utilized better.
+ */
+UK_CTASSERT((sizeof(struct lwip_netdev_data)) <= CONFIG_UK_NETDEV_SCRATCH_SIZE);
+
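
(For readers unfamiliar with `UK_CTASSERT`: compile-time assertions of this kind are commonly built on C11 `_Static_assert` or a negative-array-size trick. The following is illustrative only, not Unikraft's actual definition.)

    /* Illustrative compile-time assertion: declaring an array type
     * with a negative size forces a build error when `cond` is false.
     */
    #define CTASSERT_EXAMPLE(cond) \
        typedef char ctassert_example_failed[(cond) ? 1 : -1]
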
  /*
   * Global headroom settings for buffer allocations used on receive
   * and transmit. We are taking the maximum of all uknetdev devices as
@@ -89,15 +99,17 @@ static uint16_t tx_headroom = ETH_PAD_SIZE;
  static uint16_t netif_alloc_rxpkts(void *argp, struct uk_netbuf *nb[],
                     uint16_t count)
  {
-    struct uk_alloc *a;
+    struct lwip_netdev_data *lwip_data;
      uint16_t i;
      UK_ASSERT(argp);
-    a = (struct uk_alloc *) argp;
+    lwip_data = (struct lwip_netdev_data *) argp;
      for (i = 0; i < count; ++i) {
-        nb[i] = lwip_alloc_netbuf(a, UKNETDEV_BUFLEN, rx_headroom);
+        nb[i] = lwip_alloc_netbuf(lwip_data->pkt_a,
+                      UKNETDEV_BUFLEN,
+                      rx_headroom);
          if (!nb[i]) {
              /* we run out of memory */
              break;
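
As a usage sketch of the callback contract: the driver stores the callback and its `argp` from the rx queue configuration and invokes them whenever it refills its receive ring, so `lwip_data` travels to netif_alloc_rxpkts() without any global lookup. Everything below is hypothetical driver-side code; `struct drv_rxq`, `drv_ring_enqueue()`, and the batch size are made up for illustration.

    #include <stdint.h>
    #include <uk/netbuf.h>

    struct drv_rxq {
        uint16_t (*alloc_rxpkts)(void *argp, struct uk_netbuf *nb[],
                                 uint16_t count);
        void *alloc_rxpkts_argp; /* set from rxq_conf.alloc_rxpkts_argp */
        /* ... ring state ... */
    };

    /* Hypothetical enqueue helper */
    static void drv_ring_enqueue(struct drv_rxq *q, struct uk_netbuf *nb);

    static void drv_rxq_refill(struct drv_rxq *q)
    {
        struct uk_netbuf *nbs[32];
        uint16_t n, i;

        /* Calls back into netif_alloc_rxpkts() with argp == lwip_data */
        n = q->alloc_rxpkts(q->alloc_rxpkts_argp, nbs, 32);
        for (i = 0; i < n; ++i)
            drv_ring_enqueue(q, nbs[i]);
    }
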
@@ -109,8 +121,8 @@ static uint16_t netif_alloc_rxpkts(void *argp, struct uk_netbuf *nb[],
  static err_t uknetdev_output(struct netif *nf, struct pbuf *p)
  {
-    struct uk_alloc *a;
      struct uk_netdev *dev;
+    struct lwip_netdev_data *lwip_data;
      struct pbuf *q;
      struct uk_netbuf *nb;
      void *allocation;
@@ -120,18 +132,17 @@ static err_t uknetdev_output(struct netif *nf, struct pbuf *p)
      UK_ASSERT(nf);
      dev = netif_to_uknetdev(nf);
      UK_ASSERT(dev);
+    lwip_data = (struct lwip_netdev_data *) dev->scratch_pad;
+    UK_ASSERT(lwip_data);
-    a = uk_alloc_get_default();
-    if (!a)
-        return ERR_MEM;
-
-    allocation = uk_malloc(a, UKNETDEV_BUFLEN);
+    allocation = uk_malloc(lwip_data->pkt_a, UKNETDEV_BUFLEN);
      if (!allocation)
          return ERR_MEM;
      nb = uk_netbuf_prepare_buf(allocation, UKNETDEV_BUFLEN,
                     tx_headroom, 0, NULL);
      UK_ASSERT(nb);
-    nb->_a = a; /* register allocator for free operation */
+    nb->_a = lwip_data->pkt_a; /* register allocator for free operation */
+    nb->_b = allocation;
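
Registering both the allocator (`_a`) and the base pointer (`_b`) matters for the free path: when the stack releases the netbuf, the buffer can be returned to the per-device allocator. Conceptually (a simplified sketch, not the actual uknetbuf release code):

    /* Simplified sketch of the free operation enabled by `_a`/`_b`;
     * the real uk_netbuf release path does additional bookkeeping
     * (reference counting, chained netbufs, dtor callbacks).
     */
    static void netbuf_release_sketch(struct uk_netbuf *nb)
    {
        if (nb->_a && nb->_b)
            uk_free(nb->_a, nb->_b);
    }
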
      if (unlikely(p->tot_len > uk_netbuf_tailroom(nb))) {
          LWIP_DEBUGF(NETIF_DEBUG,
@@ -479,6 +490,7 @@ err_t uknetdev_init(struct netif *nf)
      if (!info.max_rx_queues || !info.max_tx_queues)
          return ERR_IF;
      lwip_data->features = info.features;
+    lwip_data->pkt_a    = a;
      /*
       * Update our global (rx|tx)_headroom setting that we use for
@@ -514,7 +526,7 @@ err_t uknetdev_init(struct netif *nf)
       */
      rxq_conf.a = a;
      rxq_conf.alloc_rxpkts = netif_alloc_rxpkts;
-    rxq_conf.alloc_rxpkts_argp = a;
+    rxq_conf.alloc_rxpkts_argp = lwip_data;
  #ifdef CONFIG_LWIP_NOTHREADS
      /*
       * In mainloop mode, we will not use interrupts.
