
Re: [Minios-devel] [UNIKRAFT PATCH v3 2/4] lib/uknetdev: Flag-based status report on rx and tx functions



Hello Simon,

Please find the comments inline.

Thanks & Regards
Sharan

On 1/31/19 1:04 AM, Simon Kuenzer wrote:
Introduce flag-based status return codes on the receive and transmit
functions. They replace the current enum-like return codes. The flags
can inform the API user about additional driver states (e.g., queue
underruns).

Signed-off-by: Simon Kuenzer <simon.kuenzer@xxxxxxxxx>
---
  lib/uknetdev/include/uk/netdev.h      | 107 ++++++++++++++++++++++----
  lib/uknetdev/include/uk/netdev_core.h |  11 +++
  plat/drivers/virtio/virtio_net.c      |  91 +++++++++++++---------
  3 files changed, 158 insertions(+), 51 deletions(-)
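
Before the detailed comments, a minimal caller-side sketch of how the new
flag-based return values are meant to be consumed (the loop, the queue id 0,
and the handle_packet() helper are illustrative assumptions, not part of this
patch):

        struct uk_netbuf *pkt;
        int status;

        do {
                status = uk_netdev_rx_one(dev, 0, &pkt);
                if (status < 0)
                        break;                      /* driver error */
                if (uk_netdev_status_successful(status))
                        handle_packet(pkt);         /* hypothetical consumer */
                if (uk_netdev_status_test_set(status, UK_NETDEV_STATUS_UNDERRUN))
                        uk_pr_debug("Receive queue underrun\n");
        } while (uk_netdev_status_more(status));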

diff --git a/lib/uknetdev/include/uk/netdev.h b/lib/uknetdev/include/uk/netdev.h
index 18878400..88b89135 100644
--- a/lib/uknetdev/include/uk/netdev.h
+++ b/lib/uknetdev/include/uk/netdev.h
@@ -438,12 +438,18 @@ static inline int uk_netdev_rxq_intr_disable(struct uk_netdev *dev,
   *   Reference to a netbuf pointer which will point to the received packet
   *   after the function call. `pkt` must never be `NULL`.
   * @return
- *   - (0): No packet available
- *   - (1): `pkt` points to received netbuf
- *   - (2): `pkt` points to received netbuf but more received packets are
- *          available on the receive queue. When interrupts are used, they are
- *          disabled until 1 is returned on subsequent calls
- *   - (<0): Error code from driver
+ *   - (>=0): Positive value with status flags
+ *     - UK_NETDEV_STATUS_SUCCESS: `pkt` points to received netbuf. Whenever
+ *        this flag is not set, there was no packet received.
+ *     - UK_NETDEV_STATUS_MORE: Indicates that more received packets are
+ *        available on the receive queue. When interrupts are used, they are
+ *        disabled until this flag is unset by a subsequent call.
+ *        This flag may only be set together with UK_NETDEV_STATUS_SUCCESS.
+ *     - UK_NETDEV_STATUS_UNDERRUN: Informs that some available slots of the
+ *        receive queue could not be programmed with a receive buffer. The
+ *        user-provided receive buffer allocator function returned with an error
+ *        (e.g., out of memory).
+ *   - (<0): Negative value with error code from driver, no packet is returned.
   */
  static inline int uk_netdev_rx_one(struct uk_netdev *dev, uint16_t queue_id,
                                   struct uk_netbuf **pkt)
@@ -474,13 +480,15 @@ static inline int uk_netdev_rx_one(struct uk_netdev *dev, uint16_t queue_id,
   *   for doing a transmission - inspect `nb_encap` with uk_netdev_info_get().
   *   `pkt` must never be `NULL`.
   * @return
- *   - (0): No space left on transmit queue, `pkt` is not sent
- *   - (1): `pkt` was successfully put to the transmit queue,
- *          queue is currently full
- *   - (2): `pkt` was successfully put to the transmit queue,
- *          there is still at least one descriptor available for a
- *          subsequent transmission
- *   - (<0): Error code from driver, `pkt` is not sent
+ *   - (>=0): Positive value with status flags
+ *     - UK_NETDEV_STATUS_SUCCESS: `pkt` was successfully put to the transmit
+ *        queue. Whenever this flag is not set, there was no space left on the
+ *        transmit queue to send `pkt`.
+ *     - UK_NETDEV_STATUS_MORE: Indicates there is still at least one descriptor
+ *         available for a subsequent transmission. If this flag is unset, the
+ *         transmit queue is full.
+ *         This flag may only be set together with UK_NETDEV_STATUS_SUCCESS.
+ *   - (<0): Negative value with error code from driver, no packet was sent.
   */
  static inline int uk_netdev_tx_one(struct uk_netdev *dev, uint16_t queue_id,
                                   struct uk_netbuf *pkt)
@@ -495,6 +503,79 @@ static inline int uk_netdev_tx_one(struct uk_netdev *dev, uint16_t queue_id,
        return dev->tx_one(dev, dev->_tx_queue[queue_id], pkt);
  }
+/**
+ * Tests for status flags returned by `uk_netdev_rx_one` or `uk_netdev_tx_one`.
+ * When the functions returned an error code or one of the selected flags is
+ * unset, this macro returns False.
+ *
+ * @param status
+ *   Return status (int)
+ * @param flag
+ *   Flag(s) to test
+ * @return
+ *   - (True):  All flags are set and status is not negative
+ *   - (False): At least one flag is not set or status is negative
+ */
+#define uk_netdev_status_test_set(status, flag)                        \
+       (((int)(status) & ((int)(flag) | INT_MIN)) == (flag))
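
A note for other readers on why INT_MIN is OR'ed into the mask: error codes
are negative, so their sign bit is set. Keeping that bit in the mask means a
negative status can never compare equal to the (non-negative) flag value, so
the macro is False for errors. A quick worked sketch, assuming a
two's-complement int and <limits.h> providing INT_MIN:

        /* status = UK_NETDEV_STATUS_SUCCESS | UK_NETDEV_STATUS_MORE = 0x3:
         *   0x3 & (0x1 | INT_MIN) == 0x1            -> test is True
         * status = -ENOMEM (sign bit set):
         *   (-ENOMEM) & (0x1 | INT_MIN) != 0x1      -> test is False
         */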
+
+/**
+ * Tests for unset status flags returned by `uk_netdev_rx_one` or
+ * `uk_netdev_tx_one`. When the functions returned an error code or one of the
+ * selected flags is set, this macro returns False.
+ *
+ * @param status
+ *   Return status (int)
+ * @param flag
+ *   Flag(s) to test
+ * @return
+ *   - (True):  Flags are not set and status is not negative
+ *   - (False): At least one flag is set or status is negative
+ */
+#define uk_netdev_status_test_unset(status, flag)                      \
+       (((int)(status) & ((int)(flag) | INT_MIN)) == (0x0))
+
+/**
+ * Tests if the return status of `uk_netdev_rx_one` or `uk_netdev_tx_one`
+ * indicates a successful operation (e.g., packet sent or received).
+ *
+ * @param status
+ *   Return status (int)
+ * @return
+ *   - (True):  Operation was successful
+ *   - (False): Operation was unsuccessful or an error happened
+ */
+#define uk_netdev_status_successful(status)                    \
+       uk_netdev_status_test_set((status), UK_NETDEV_STATUS_SUCCESS)
+
+/**
+ * Tests if the return status of `uk_netdev_rx_one` or `uk_netdev_tx_one`
+ * indicates that the operation should be retried (e.g., the packet was not
+ * sent or no packet was received).
+ *
+ * @param status
+ *   Return status (int)
+ * @return
+ *   - (True):  Operation should be retried
+ *   - (False): Operation was successful or error happened
+ */
+#define uk_netdev_status_notready(status)                              \
+       uk_netdev_status_test_unset((status), UK_NETDEV_STATUS_SUCCESS)
+
+/**
+ * Tests if the return status of `uk_netdev_rx_one` or `uk_netdev_tx_one`
+ * indicates that the last operation can successfully be repeated.
+ *
+ * @param status
+ *   Return status (int)
+ * @return
+ *   - (True):  Flags UK_NETDEV_STATUS_SUCCESS and UK_NETDEV_STATUS_MORE are set
+ *   - (False): At least one of the flags is not set or an error happened
+ */
+#define uk_netdev_status_more(status)                                  \
+       uk_netdev_status_test_set((status), (UK_NETDEV_STATUS_SUCCESS   \
+                                            | UK_NETDEV_STATUS_MORE))
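
And the matching transmit-side sketch with the same helpers (again only an
illustration; the retry policy and the pkt/dev variables are assumptions of
the example):

        status = uk_netdev_tx_one(dev, 0, pkt);
        if (status < 0) {
                /* driver error, pkt was not sent */
        } else if (uk_netdev_status_notready(status)) {
                /* no space left on the transmit queue, try again later */
        } else if (!uk_netdev_status_more(status)) {
                /* pkt was sent, but the transmit queue is now full */
        }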
+
  #ifdef __cplusplus
  }
  #endif
diff --git a/lib/uknetdev/include/uk/netdev_core.h b/lib/uknetdev/include/uk/netdev_core.h
index d30886de..f877f1e7 100644
--- a/lib/uknetdev/include/uk/netdev_core.h
+++ b/lib/uknetdev/include/uk/netdev_core.h
@@ -285,6 +285,17 @@ typedef int (*uk_netdev_rxq_intr_enable_t)(struct uk_netdev *dev,
  typedef int (*uk_netdev_rxq_intr_disable_t)(struct uk_netdev *dev,
                                            struct uk_netdev_rx_queue *queue);
+/**
+ * Status code flags returned by rx and tx functions
+ */
+/** Successful operation (packet received or transmitted). */
+#define UK_NETDEV_STATUS_SUCCESS  (0x1)
+/** More room available for operation (e.g., still space on queue for sending
+    or more packets available on receive queue). */
+#define UK_NETDEV_STATUS_MORE     (0x2)
+/** Queue underrun (e.g., out-of-memory when allocating new receive buffers). */
+#define UK_NETDEV_STATUS_UNDERRUN (0x4)
+
  /** Driver callback type to retrieve one packet from a RX queue. */
  typedef int (*uk_netdev_rx_one_t)(struct uk_netdev *dev,
                                  struct uk_netdev_rx_queue *queue,
diff --git a/plat/drivers/virtio/virtio_net.c b/plat/drivers/virtio/virtio_net.c
index 24ef63b0..cb771efe 100644
--- a/plat/drivers/virtio/virtio_net.c
+++ b/plat/drivers/virtio/virtio_net.c
@@ -213,8 +213,8 @@ static int virtio_netdev_rxq_dequeue(struct uk_netdev_rx_queue *rxq,
  static int virtio_netdev_rxq_enqueue(struct uk_netdev_rx_queue *rxq,
                                     struct uk_netbuf *netbuf);
  static int virtio_netdev_recv_done(struct virtqueue *vq, void *priv);
-static void virtio_netdev_rx_fillup(struct uk_netdev_rx_queue *rxq,
-                                   __u16 num, int notify);
+static int virtio_netdev_rx_fillup(struct uk_netdev_rx_queue *rxq,
+                                  __u16 num, int notify);
/**
   * Static global constants
@@ -268,12 +268,13 @@ static void virtio_netdev_xmit_free(struct uk_netdev_tx_queue *txq)
#define RX_FILLUP_BATCHLEN 64

-static void virtio_netdev_rx_fillup(struct uk_netdev_rx_queue *rxq,
-                                   __u16 nb_desc,
-                                   int notify)
+static int virtio_netdev_rx_fillup(struct uk_netdev_rx_queue *rxq,
+                                  __u16 nb_desc,
+                                  int notify)
  {
        struct uk_netbuf *netbuf[RX_FILLUP_BATCHLEN];
        int rc = 0;
+       int status = 0x0;
        __u16 i, j;
        __u16 req;
        __u16 cnt = 0;
@@ -305,7 +306,8 @@ static void virtio_netdev_rx_fillup(struct uk_netdev_rx_queue *rxq,
                                 */
                                for (j = i; j < cnt; j++)
                                        uk_netbuf_free(netbuf[j]);
-                               return;
+                               status |= UK_NETDEV_STATUS_UNDERRUN;
+                               return status;
During an underrun, we should still notify the host about the buffers we did
fill in. This has to be "goto out;" instead of "return status;".
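I.e., roughly along these lines (sketch based on the surrounding hunk):

                                for (j = i; j < cnt; j++)
                                        uk_netbuf_free(netbuf[j]);
                                status |= UK_NETDEV_STATUS_UNDERRUN;
                                /* fall through to the notify path so the host
                                 * still sees the buffers we already filled
                                 */
                                goto out;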

                        }
                        filled += 2;
                }
@@ -313,6 +315,7 @@ static void virtio_netdev_rx_fillup(struct uk_netdev_rx_queue *rxq,
                if (unlikely(cnt < req)) {
                        uk_pr_debug("Incomplete fill-up of netbufs on receive 
virtqueue %p: Out of memory",
                                    rxq);
+                       status |= UK_NETDEV_STATUS_UNDERRUN;
                        goto out;
                }
        }
@@ -326,6 +329,8 @@ out:
         */
        if (notify && filled)
                virtqueue_host_notify(rxq->vq);
+
+       return status;
  }
static int virtio_netdev_xmit(struct uk_netdev *dev,
@@ -337,6 +342,7 @@ static int virtio_netdev_xmit(struct uk_netdev *dev,
        struct virtio_net_hdr_padded *padded_hdr;
        int16_t header_sz = sizeof(*padded_hdr);
        int rc = 0;
+       int status = 0x0;
        size_t total_len = 0;
        __u8  *buf_start;
        size_t buf_len;
@@ -361,7 +367,7 @@ static int virtio_netdev_xmit(struct uk_netdev *dev,
        if (unlikely(rc != 1)) {
                uk_pr_err("Failed to prepend virtio header\n");
                rc = -ENOSPC;
-               goto exit;
+               goto err_exit;
        }
        vhdr = pkt->data;
@@ -388,18 +394,18 @@ static int virtio_netdev_xmit(struct uk_netdev *dev,
        rc = uk_sglist_append(&queue->sg, vhdr, sizeof(*vhdr));
        if (unlikely(rc != 0)) {
                uk_pr_err("Failed to append to the sg list\n");
-               goto exit;
+               goto err_remove_vhdr;
        }
        rc = uk_sglist_append(&queue->sg, buf_start, buf_len);
        if (unlikely(rc != 0)) {
                uk_pr_err("Failed to append to the sg list\n");
-               goto exit;
+               goto err_remove_vhdr;
        }
        if (pkt->next) {
                rc = uk_sglist_append_netbuf(&queue->sg, pkt->next);
                if (unlikely(rc != 0)) {
                        uk_pr_err("Failed to append to the sg list\n");
-                       goto exit;
+                       goto err_remove_vhdr;
                }
        }
@@ -408,7 +414,7 @@ static int virtio_netdev_xmit(struct uk_netdev *dev,
                uk_pr_err("Packet size too big: %lu, max:%u\n",
                          total_len, VIRTIO_PKT_BUFFER_LEN);
                rc = -ENOTSUP;
-               goto remove_vhdr;
+               goto err_remove_vhdr;
        }
/**
@@ -417,31 +423,34 @@ static int virtio_netdev_xmit(struct uk_netdev *dev,
        rc = virtqueue_buffer_enqueue(queue->vq, pkt, &queue->sg,
                                      queue->sg.sg_nseg, 0);
        if (likely(rc >= 0)) {
+               status |= UK_NETDEV_STATUS_SUCCESS;
                /**
                 * Notify the host the new buffer.
                 */
                virtqueue_host_notify(queue->vq);
                /**
                 * When there is further space available in the ring
-                * return 2 else 1.
+                * return UK_NETDEV_STATUS_MORE.
                 */
-               rc = likely(rc > 0) ? 2 : 1;
+               status |= likely(rc > 0) ? UK_NETDEV_STATUS_MORE : 0x0;
        } else if (rc == -ENOSPC) {
                uk_pr_debug("No more descriptor available\n");
-               rc = 0;
-               goto remove_vhdr;
+               /**
+                * Remove header before exiting because we could not send
+                */
+               uk_netbuf_header(pkt, -header_sz);
        } else {
                uk_pr_err("Failed to enqueue descriptors into the ring: %d\n",
                          rc);
-               goto remove_vhdr;
+               goto err_remove_vhdr;
        }
+       return status;
-exit:
-       return rc;
-
-remove_vhdr:
+err_remove_vhdr:
        uk_netbuf_header(pkt, -header_sz);
-       goto exit;
+err_exit:
+       UK_ASSERT(rc < 0);
+       return rc;
  }
static int virtio_netdev_rxq_enqueue(struct uk_netdev_rx_queue *rxq,
@@ -529,8 +538,8 @@ static int virtio_netdev_recv(struct uk_netdev *dev,
                              struct uk_netdev_rx_queue *queue,
                              struct uk_netbuf **pkt)
  {
+       int status = 0x0;
        int rc = 0;
-       int cnt = 0;
UK_ASSERT(dev && queue);
        UK_ASSERT(pkt);
@@ -545,14 +554,14 @@ static int virtio_netdev_recv(struct uk_netdev *dev,
                uk_pr_err("Failed to dequeue the packet: %d\n", rc);
                goto err_exit;
        }
-       cnt = (*pkt) ? 1 : 0;
-       virtio_netdev_rx_fillup(queue, (queue->nb_desc - rc), 1);
+       status |= (*pkt) ? UK_NETDEV_STATUS_SUCCESS : 0x0;
+       status |= virtio_netdev_rx_fillup(queue, (queue->nb_desc - rc), 1);
/* Enable interrupt only when user had previously enabled it */
        if (queue->intr_enabled & VTNET_INTR_USR_EN_MASK) {
                /* Need to enable the interrupt on the last packet */
                rc = virtqueue_intr_enable(queue->vq);
-               if (rc == 1 && cnt == 0) {
+               if (rc == 1 && !(*pkt)) {
                        /**
                         * Packet arrive after reading the queue and before
                         * enabling the interrupt
@@ -563,29 +572,35 @@ static int virtio_netdev_recv(struct uk_netdev *dev,
                                          rc);
                                goto err_exit;
                        }
+                       status |= UK_NETDEV_STATUS_SUCCESS;
+
                        /* Need to enable the interrupt on the last packet */
                        rc = virtqueue_intr_enable(queue->vq);
-                       cnt = (rc == 1) ? 2 : 1;
-                       /* Since we received something, we need to fillup */
-                       virtio_netdev_rx_fillup(queue, (queue->nb_desc - rc), 1);
-               } else if (cnt > 0) {
-                       /* When there is packet in the buffer */
-                       cnt = (rc == 1) ? 2 : 1;
+                       status |= (rc == 1) ? UK_NETDEV_STATUS_MORE : 0x0;
+
+                       /*
+                        * Since we received something, we need to fillup
+                        * and notify
+                        */
+                       status |= virtio_netdev_rx_fillup(queue,
+                                                         (queue->nb_desc - rc),
+                                                         1);
+               } else if (*pkt) {
+                       /* When we originally got a packet and there is more */
+                       status |= (rc == 1) ? UK_NETDEV_STATUS_MORE : 0x0;
                }
-       } else if (cnt > 0) {
+       } else if (*pkt) {
                /**
                 * For polling case, we report always there are further
                 * packets unless the queue is empty.
                 */
-               cnt = 2;
+               status |= UK_NETDEV_STATUS_MORE;
        }
-
-exit:
-       return cnt;
+       return status;
err_exit:
-       cnt = rc;
-       goto exit;
+       UK_ASSERT(rc < 0);
+       return rc;
  }
static struct uk_netdev_rx_queue *virtio_netdev_rx_queue_setup(


_______________________________________________
Minios-devel mailing list
Minios-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/minios-devel

 

