
Re: [Minios-devel] [UNIKRAFT PATCH v3 02/12] lib/uknetdev: Add alignment size for packet buffers allocations




On 8/12/20 2:34 PM, Costin Lupu wrote:
Hi Sharan,

Please see inline.

On 8/12/20 3:04 PM, Sharan Santhanam wrote:
Hello Costin,

On 8/11/20 12:40 PM, Costin Lupu wrote:
Hi Sharan,

Please see inline.

On 8/11/20 11:40 AM, Sharan Santhanam wrote:
Hello Costin,

Thank you for the work. Please find the comments inline.

Thanks & Regards
Sharan

On 3/3/20 3:13 PM, Costin Lupu wrote:
Packet buffers may need different alignments, depending on their driver requirements. For the Xen netfront driver, packet buffers have to be page aligned because they are saved in pages shared between backend and frontend. For the KVM virtio net driver, word-size alignment suffices. Therefore, we need to save the alignment size in order to support multiple drivers.

This patch also squeezes in a small fix for setting the max_mtu when retrieving device information from the virtio net driver.

Signed-off-by: Costin Lupu <costin.lupu@xxxxxxxxx>
---
    lib/uknetdev/include/uk/netdev_core.h | 1 +
    plat/drivers/virtio/virtio_net.c      | 2 ++
    2 files changed, 3 insertions(+)

diff --git a/lib/uknetdev/include/uk/netdev_core.h b/lib/uknetdev/include/uk/netdev_core.h
index f073e101..268d54d7 100644
--- a/lib/uknetdev/include/uk/netdev_core.h
+++ b/lib/uknetdev/include/uk/netdev_core.h
@@ -94,6 +94,7 @@ struct uk_netdev_info {
 	uint16_t max_mtu;   /**< Maximum supported MTU size. */
 	uint16_t nb_encap_tx;  /**< Number of bytes required as headroom for tx. */
 	uint16_t nb_encap_rx;  /**< Number of bytes required as headroom for rx. */
+	uint16_t align;  /**< Alignment required for data address. */
Why do we need to introduce yet another align field? Each of the tx and rx queues has a `uk_netdev_queue_info` which already has an align field. Since packet buffers are more closely tied to the rxq and txq, why don't we use those align fields?
Indeed, I overlooked the nb_align field from the queue info. However, it
isn't initialized or used anywhere in the code. Besides this, isn't
alignment a property of the device rather than the queue? Are there
network devices with different alignments for their queues? Maybe we
should drop the field from the queue and use the one from device info
instead.

I needed the alignment for allocating buffers in lwip (please see function `netbuf_alloc_helper_init()` in the lwip glue code). All the information needed for allocating buffers seems to be device specific, not queue specific. If we had used the nb_align field of the queues, then we would need a buffer allocation helper for each queue.
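To make the discussion concrete, here is a minimal sketch of how a device-reported alignment could drive packet buffer allocation. The `fake_netdev_info` struct and `netbuf_alloc_sketch()` helper are hypothetical stand-ins for the real `uk_netdev_info` and lwip glue code; only `posix_memalign()` is a real API:

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical, trimmed-down mirror of the fields discussed from
 * struct uk_netdev_info; the real struct lives in uk/netdev_core.h. */
struct fake_netdev_info {
	uint16_t nb_encap_rx; /* headroom bytes reserved before the frame */
	uint16_t align;       /* required alignment of the buffer address */
};

/* Sketch of an allocation helper: the buffer start must satisfy the
 * driver's alignment requirement; headroom is included in the size. */
static void *netbuf_alloc_sketch(const struct fake_netdev_info *info,
				 size_t payload_len)
{
	void *buf = NULL;
	size_t total = (size_t)info->nb_encap_rx + payload_len;

	/* posix_memalign requires the alignment to be a power of two
	 * and a multiple of sizeof(void *). */
	if (posix_memalign(&buf, info->align, total) != 0)
		return NULL;
	return buf;
}
```

With `align = 4096` (the netfront case) every returned buffer starts on a page boundary; with `align = sizeof(void *)` (the virtio case) only word alignment is enforced.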
The alignment property wasn't used so far because we did not have an alignment requirement for the other driver. Virtio has a requirement that the number of descriptors be a power of two, and I guess that would be set in the virtio drivers.

I see your point on the necessity of having uniform queue information for all the rx/tx queues of a netdevice. But we could get around the problem by fetching the queue info for one queue and applying it to all the different instances of rx and tx queues, while still maintaining the configuration granularity of a queue.
Ok, so let me see if I got this right. Are you saying that I should get
the nb_align values only for the first rx queue and the first tx queue
and that would be enough?

Yes in this case.
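The approach agreed on here can be sketched as follows. The structs and the query function are hypothetical mirrors of `uk_netdev_queue_info` and `uk_netdev_rxq_info_get()`; the key assumption, which makes querying only queue 0 sufficient, is that every queue of a device reports the same alignment:

```c
#include <stdint.h>

/* Hypothetical, trimmed-down mirror of the struct being discussed;
 * the real definition lives in uk/netdev_core.h. */
struct fake_queue_info {
	uint16_t nb_align; /* alignment requirement reported per queue */
};

/* Stand-in for the rx queue info query: in this sketch all queues
 * of the device report the same alignment. */
static void fake_rxq_info_get(uint16_t queue_id,
			      struct fake_queue_info *qinfo)
{
	(void)queue_id;
	qinfo->nb_align = 4096; /* e.g. page alignment for netfront */
}

/* Query queue 0 once and reuse its nb_align for every buffer
 * allocation on the device. */
static uint16_t device_align_from_first_queue(void)
{
	struct fake_queue_info qinfo;

	fake_rxq_info_get(0, &qinfo);
	return qinfo.nb_align;
}
```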



The other option might be to move `uk_netdev_queue_info` as a struct into `netdev_info`, with two different fields, `rxq_info` and `txq_info`, and fetch the information only once when configuring the respective queue. This also raises the question of where tx_headroom and rx_headroom should be defined. Since that changes the scope of the patch series, we can split it off as a separate patch series. For the current problem, I would use the `nb_align` field from `uk_netdev_queue_info`, since the queue properties would remain either way.
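The alternative layout suggested above might look roughly like this. All struct and field names are illustrative sketches based on the discussion, not the actual Unikraft definitions:

```c
#include <stdint.h>

/* Hypothetical per-queue info, modeled on uk_netdev_queue_info. */
struct sketch_queue_info {
	uint16_t nb_max;   /* maximum number of descriptors */
	uint16_t nb_align; /* descriptor/buffer alignment */
};

/* Hypothetical device info embedding the queue info, as proposed:
 * each sub-struct would be fetched once when the respective queue
 * is configured. */
struct sketch_netdev_info {
	uint16_t max_mtu;
	uint16_t nb_encap_tx;
	uint16_t nb_encap_rx;
	struct sketch_queue_info rxq_info;
	struct sketch_queue_info txq_info;
};
```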

In the netdev API, both the rx_one and tx_one operations happen at the queue level and not at the device level, since we select the queue on which to send/receive the packet on the netdevice. The netdev API has taken multiqueue into account. The receive part in lwip also happens at the queue level, as all the callbacks are set per queue configuration, while the tx part is still done per netdevice, where we don't use the queue information yet.

`netbuf_alloc_helper_init` does not have any dependency on a netdevice, except that we pass netdev_info to it and collapse netdev_info into a flat structure for that interface, which is a valid method for lwip to handle its memory allocation. A suggestion might be to fetch the device and queue information as part of `netbuf_alloc_helper_init`. But the needs of lwip should not force a change to netdev, as they are a specific use case. The other use case of running DPDK on Unikraft does not have these requirements. I would keep the netdev API flexible enough and handle the specific use case at a higher level than netdev.
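The suggestion to collapse device and queue properties into a flat helper at init time could be sketched like this. The helper name and fields are hypothetical, loosely modeled on the `netbuf_alloc_helper_init()` mentioned above:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical flat helper state: lwip collapses the device-level
 * and queue-level properties it needs into one structure at init
 * time, so netdev itself stays unchanged. */
struct sketch_alloc_helper {
	uint16_t headroom; /* from device info (nb_encap_rx) */
	uint16_t align;    /* from queue info (nb_align), power of two */
};

static void sketch_alloc_helper_init(struct sketch_alloc_helper *h,
				     uint16_t nb_encap_rx,
				     uint16_t nb_align)
{
	h->headroom = nb_encap_rx;
	h->align = nb_align;
}

/* Total bytes to request so that the headroom fits and the length is
 * rounded up to the alignment (align must be a power of two). */
static size_t sketch_alloc_size(const struct sketch_alloc_helper *h,
				size_t payload_len)
{
	size_t len = (size_t)h->headroom + payload_len;

	return (len + h->align - 1) & ~((size_t)h->align - 1);
}
```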


Thanks & Regards

Sharan

 };

 /**
diff --git a/plat/drivers/virtio/virtio_net.c b/plat/drivers/virtio/virtio_net.c
index efc2cb71..9f1873c5 100644
--- a/plat/drivers/virtio/virtio_net.c
+++ b/plat/drivers/virtio/virtio_net.c
@@ -1048,8 +1048,10 @@ static void virtio_net_info_get(struct uk_netdev *dev,
 	dev_info->max_rx_queues = vndev->max_vqueue_pairs;
 	dev_info->max_tx_queues = vndev->max_vqueue_pairs;
+	dev_info->max_mtu = vndev->max_mtu;
 	dev_info->nb_encap_tx = sizeof(struct virtio_net_hdr_padded);
 	dev_info->nb_encap_rx = sizeof(struct virtio_net_hdr_padded);
+	dev_info->align = sizeof(void *); /* word size alignment */
 }

 static int virtio_net_start(struct uk_netdev *n)



 

