
Re: [Xen-devel] [v8][PATCH 02/17] introduce XEN_DOMCTL_set_rdm



>>> On 01.12.14 at 10:24, <tiejun.chen@xxxxxxxxx> wrote:
> --- a/xen/drivers/passthrough/pci.c
> +++ b/xen/drivers/passthrough/pci.c
> @@ -34,6 +34,7 @@
>  #include <xen/tasklet.h>
>  #include <xsm/xsm.h>
>  #include <asm/msi.h>
> +#include <xen/stdbool.h>

Please don't - we use bool_t in the hypervisor, not bool. The header
only exists for source code shared with the tools.
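I.e. something along these lines (untested) ought to do, without any
extra include:

    /* bool_t plus !! - no need for <xen/stdbool.h> or true/false. */
    d->arch.hvm_domain.pci_force = !!(xdsr->flags & PCI_DEV_RDM_CHECK);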

> @@ -1553,6 +1554,44 @@ int iommu_do_pci_domctl(
>          }
>          break;
>  
> +    case XEN_DOMCTL_set_rdm:
> +    {
> +        struct xen_domctl_set_rdm *xdsr = &domctl->u.set_rdm;
> +        struct xen_guest_pcidev_info *pcidevs = NULL;
> +        struct domain *d = rcu_lock_domain_by_any_id(domctl->domain);

"d" gets passed into this function - no need to shadow the variable
and (wrongly) re-obtain the pointer.
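I.e. (untested)

    struct xen_domctl_set_rdm *xdsr = &domctl->u.set_rdm;

alone, operating on the "d" this function was handed, and with the
rcu_lock_domain_by_any_id() / rcu_unlock_domain() pair dropped
altogether.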

> +
> +        if ( d == NULL )
> +            return -ESRCH;
> +
> +        d->arch.hvm_domain.pci_force =
> +                            xdsr->flags & PCI_DEV_RDM_CHECK ? true : false;
> +        d->arch.hvm_domain.num_pcidevs = xdsr->num_pcidevs;

You shouldn't set the count before setting the pointer - anything
looking at the structure in between would see a non-zero count paired
with a stale (or NULL) pointer (see the sketch further down).

> +        d->arch.hvm_domain.pcidevs = NULL;
> +
> +        if ( xdsr->num_pcidevs )
> +        {
> +            pcidevs = xmalloc_array(xen_guest_pcidev_info_t,
> +                                    xdsr->num_pcidevs);

New domctl-s must not represent security risks: xdsr->num_pcidevs
can be (almost) arbitrarily large - do you really want to allow such
huge allocations? A reasonable upper bound could for example be
the total number of PCI devices the hypervisor knows about.
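E.g. something along these lines (untested; RDM_MAX_PCIDEVS is merely
a placeholder for whatever bound you end up choosing):

    if ( xdsr->num_pcidevs > RDM_MAX_PCIDEVS )
        return -EINVAL;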

> +            if ( pcidevs == NULL )
> +            {
> +                rcu_unlock_domain(d);
> +                return -ENOMEM;
> +            }
> +
> +            if ( copy_from_guest(pcidevs, xdsr->pcidevs,
> +                                 xdsr->num_pcidevs*sizeof(*pcidevs)) )
> +            {
> +                xfree(pcidevs);
> +                rcu_unlock_domain(d);
> +                return -EFAULT;
> +            }
> +        }
> +
> +        d->arch.hvm_domain.pcidevs = pcidevs;

If the operation gets issued more than once for a given domain,
you're leaking the old pointer here. Overall you should think a bit
more about this multiple-use case (or outright disallow it).
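If multiple use is to stay permitted, the tail of the handler could
look something like this (untested, and leaving aside whatever locking
may be needed):

    /* Free a previously installed array before publishing the new one. */
    xfree(d->arch.hvm_domain.pcidevs);
    d->arch.hvm_domain.pcidevs = pcidevs;
    /* Only now expose the count, once the pointer is valid. */
    d->arch.hvm_domain.num_pcidevs = xdsr->num_pcidevs;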

> --- a/xen/drivers/passthrough/vtd/dmar.c
> +++ b/xen/drivers/passthrough/vtd/dmar.c
> @@ -674,6 +674,14 @@ acpi_parse_one_rmrr(struct acpi_dmar_header *header)
>                          "  RMRR region: base_addr %"PRIx64
>                          " end_address %"PRIx64"\n",
>                          rmrru->base_address, rmrru->end_address);
> +            /*
> +             * TODO: we may provide a precise paramter just to reserve
> +             * RMRR range specific to one device.
> +             */
> +            dprintk(XENLOG_WARNING VTDPREFIX,
> +                    "So please set pci_rdmforce to reserve these ranges"
> +                    " if you need such a device in hotplug case.\n");

It makes no sense to use dprintk() here. I also don't see how this
message relates to whatever may have been logged immediately
before, so the wording ("So please set ...") is questionable. Nor is the
reference to "hotplug case" meaningful here - in this context, only
physical (host) device hotplug can be meant without further
qualification. In the end I think trying to log something here is just
wrong - simply drop the message and make sure whatever you want
to say can be found easily by looking elsewhere.

> --- a/xen/include/asm-x86/hvm/domain.h
> +++ b/xen/include/asm-x86/hvm/domain.h
> @@ -90,6 +90,10 @@ struct hvm_domain {
>      /* Cached CF8 for guest PCI config cycles */
>      uint32_t                pci_cf8;
>  
> +    bool_t                  pci_force;
> +    uint32_t                num_pcidevs;
> +    struct xen_guest_pcidev_info      *pcidevs;

Without a comment all these field names are pretty questionable.
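I.e. at the very least something like (untested, wording to be
adjusted to what the fields actually mean):

    /* Were RMRR ranges requested via XEN_DOMCTL_set_rdm? */
    bool_t                  pci_force;
    /* Devices passed in via XEN_DOMCTL_set_rdm, needing RMRR reservation. */
    uint32_t                num_pcidevs;
    struct xen_guest_pcidev_info      *pcidevs;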

Jan

