
Re: [RFC PATCH V1 10/12] libxl: Add support for virtio-disk configuration



On Mon, 3 Aug 2020, Oleksandr Tyshchenko wrote:
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>
> 
> This patch adds basic support for configuring and assisting a virtio-disk
> backend (emulator) which is intended to run outside of QEMU and can be run
> in any domain.
> 
> Xenstore was chosen as the communication interface so that the emulator
> running in a non-toolstack domain can get its configuration either by
> reading Xenstore directly or by receiving command line parameters (an
> updated 'xl devd' running in the same domain would read Xenstore
> beforehand and call the backend executable with the required arguments).
> 
> An example of domain configuration (two disks are assigned to the guest,
> the second one in read-only mode):
> 
> vdisk = [ 'backend=DomD, disks=rw:/dev/mmcblk0p3;ro:/dev/mmcblk1p3' ]
> 
> Where per-disk Xenstore entries are:
> - filename and readonly flag (configured via "vdisk" property)
> - base and irq (allocated dynamically)
> 
> Besides handling the 'visible' params described in the configuration file,
> the patch also allocates virtio-mmio specific ones for each device and
> writes them into Xenstore. The virtio-mmio params (irq and base) are
> unique per guest domain; they are allocated at domain creation time and
> passed through to the emulator. Each VirtIO device has at least one pair
> of these params.
> 
> TODO:
> 1. An extra "virtio" property could be removed.
> 2. Update documentation.
> 
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>
> ---
>  tools/libxl/Makefile                 |   4 +-
>  tools/libxl/libxl_arm.c              |  63 +++++++++++++++----
>  tools/libxl/libxl_create.c           |   1 +
>  tools/libxl/libxl_internal.h         |   1 +
>  tools/libxl/libxl_types.idl          |  15 +++++
>  tools/libxl/libxl_types_internal.idl |   1 +
>  tools/libxl/libxl_virtio_disk.c      | 109 +++++++++++++++++++++++++++++++++
>  tools/xl/Makefile                    |   2 +-
>  tools/xl/xl.h                        |   3 +
>  tools/xl/xl_cmdtable.c               |  15 +++++
>  tools/xl/xl_parse.c                  | 115 +++++++++++++++++++++++++++++++++++
>  tools/xl/xl_virtio_disk.c            |  46 ++++++++++++++
>  12 files changed, 360 insertions(+), 15 deletions(-)
>  create mode 100644 tools/libxl/libxl_virtio_disk.c
>  create mode 100644 tools/xl/xl_virtio_disk.c
> 
> diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
> index 38cd43a..df94b13 100644
> --- a/tools/libxl/Makefile
> +++ b/tools/libxl/Makefile
> @@ -141,7 +141,9 @@ LIBXL_OBJS = flexarray.o libxl.o libxl_create.o libxl_dm.o libxl_pci.o \
>                       libxl_vtpm.o libxl_nic.o libxl_disk.o libxl_console.o \
>                       libxl_cpupool.o libxl_mem.o libxl_sched.o libxl_tmem.o \
>                       libxl_9pfs.o libxl_domain.o libxl_vdispl.o \
> -                     libxl_pvcalls.o libxl_vsnd.o libxl_vkb.o $(LIBXL_OBJS-y)
> +                     libxl_pvcalls.o libxl_vsnd.o libxl_vkb.o \
> +                     libxl_virtio_disk.o $(LIBXL_OBJS-y)
> +
>  LIBXL_OBJS += libxl_genid.o
>  LIBXL_OBJS += _libxl_types.o libxl_flask.o _libxl_types_internal.o
>  
> diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
> index 4f748e3..469a8b0 100644
> --- a/tools/libxl/libxl_arm.c
> +++ b/tools/libxl/libxl_arm.c
> @@ -13,6 +13,12 @@
>  #define GUEST_VIRTIO_MMIO_SIZE  xen_mk_ullong(0x200)
>  #define GUEST_VIRTIO_MMIO_SPI   33
>  
> +#ifndef container_of
> +#define container_of(ptr, type, member) ({                   \
> +        typeof( ((type *)0)->member ) *__mptr = (ptr);       \
> +        (type *)( (char *)__mptr - offsetof(type,member) );})
> +#endif
> +
>  static const char *gicv_to_string(libxl_gic_version gic_version)
>  {
>      switch (gic_version) {
> @@ -44,14 +50,32 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
>          vuart_enabled = true;
>      }
>  
> -    /*
> -     * XXX: Handle properly virtio
> -     * A proper solution would be the toolstack to allocate the interrupts
> -     * used by each virtio backend and let the backend now which one is used
> -     */
>      if (libxl_defbool_val(d_config->b_info.arch_arm.virtio)) {
> -        nr_spis += (GUEST_VIRTIO_MMIO_SPI - 32) + 1;
> +        uint64_t virtio_base;
> +        libxl_device_virtio_disk *virtio_disk;
> +
> +        virtio_base = GUEST_VIRTIO_MMIO_BASE;
>          virtio_irq = GUEST_VIRTIO_MMIO_SPI;
> +
> +        if (!d_config->num_virtio_disks) {
> +            LOG(ERROR, "Virtio is enabled, but no Virtio devices present\n");
> +            return ERROR_FAIL;
> +        }
> +        virtio_disk = &d_config->virtio_disks[0];
> +
> +        for (i = 0; i < virtio_disk->num_disks; i++) {
> +            virtio_disk->disks[i].base = virtio_base;
> +            virtio_disk->disks[i].irq = virtio_irq;
> +
> +            LOG(DEBUG, "Allocate Virtio MMIO params: IRQ %u BASE 0x%"PRIx64,
> +                virtio_irq, virtio_base);
> +
> +            virtio_irq ++;
> +            virtio_base += GUEST_VIRTIO_MMIO_SIZE;
> +        }
> +        virtio_irq --;
> +
> +        nr_spis += (virtio_irq - 32) + 1;

It looks like it is one interrupt per device, which could lead to quite
a few of them being allocated.
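
To put a number on it (a back-of-the-envelope calculation based on the
constants in this patch, GUEST_VIRTIO_MMIO_SPI = 33 and
GUEST_VIRTIO_MMIO_SIZE = 0x200): with 6 disks the loop above would
allocate

    disk 0: SPI 33, base GUEST_VIRTIO_MMIO_BASE
    disk 1: SPI 34, base GUEST_VIRTIO_MMIO_BASE + 0x200
    ...
    disk 5: SPI 38, base GUEST_VIRTIO_MMIO_BASE + 0xa00

so nr_spis grows by (38 - 32) + 1 = 7.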

The issue is that today we don't really handle virtual interrupts
differently from physical interrupts in Xen. So, if we end up allocating,
let's say, 6 virtio interrupts for a domain, the chance of a clash with a
physical interrupt of a passthrough device is real: for instance, a device
wired to physical SPI 35 could no longer be assigned to a guest that is
already using virtual SPI 35 for one of its virtio disks.

I am not entirely sure how to solve it, but these are a few ideas:
- choosing virtio interrupts that are less likely to conflict (maybe >
  1000)
- making the virtio irq (optionally) configurable so that a user could
  override the default irq and specify one that doesn't conflict (see the
  rough sketch below)
- implementing support for virq != pirq (even the xl interface doesn't
  allow specifying the virq number for passthrough devices, see "irqs")
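
Just to illustrate the second option, here is a rough sketch of what the
config could look like. The 'irq=' key is made up purely for illustration
(it is not an existing xl/libxl parameter) and the numbers are arbitrary
values picked to stay clear of the physical SPIs used on a given platform:

    # hypothetical syntax: optional per-disk irq override, falling back
    # to the current dynamic allocation when omitted
    vdisk = [ 'backend=DomD, disks=rw:irq=1001:/dev/mmcblk0p3;ro:irq=1002:/dev/mmcblk1p3' ]

libxl could then sanity-check the requested values against the SPIs
already claimed for passthrough via "irqs".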



 

