
Re: [Xen-devel] [PATCH v3 4/4] iommu / pci: re-implement XEN_DOMCTL_get_device_group...


  • To: Paul Durrant <paul.durrant@xxxxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Tue, 16 Jul 2019 13:28:15 +0200
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx, Jan Beulich <jbeulich@xxxxxxxx>
  • Delivery-date: Tue, 16 Jul 2019 11:28:29 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Tue, Jul 16, 2019 at 11:16:57AM +0100, Paul Durrant wrote:
> ... using the new iommu_group infrastructure.
> 
> Because 'sibling' devices are now members of the same iommu_group,
> implement the domctl by looking up the iommu_group of the pdev with the
> matching SBDF and then finding all the assigned pdevs that are in the
> group.
> 
> Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
> ---
> Cc: Jan Beulich <jbeulich@xxxxxxxx>
> 
> v3:
>  - Make 'max_sdevs' parameter in iommu_get_device_group() unsigned.
>  - Add missing check of max_sdevs to avoid buffer overflow.
> 
> v2:
>  - Re-implement in the absence of a per-group devs list.
>  - Make use of pci_sbdf_t.
> ---
>  xen/drivers/passthrough/groups.c | 46 ++++++++++++++++++++++++++++++++++++
>  xen/drivers/passthrough/pci.c    | 51 ++--------------------------------------
>  xen/include/xen/iommu.h          |  3 +++
>  3 files changed, 51 insertions(+), 49 deletions(-)
> 
> diff --git a/xen/drivers/passthrough/groups.c b/xen/drivers/passthrough/groups.c
> index c6d00980b6..4e6e8022c1 100644
> --- a/xen/drivers/passthrough/groups.c
> +++ b/xen/drivers/passthrough/groups.c
> @@ -12,8 +12,12 @@
>   * GNU General Public License for more details.
>   */
>  
> +#include <xen/guest_access.h>
>  #include <xen/iommu.h>
> +#include <xen/pci.h>
>  #include <xen/radix-tree.h>
> +#include <xen/sched.h>
> +#include <xsm/xsm.h>
>  
>  struct iommu_group {
>      unsigned int id;
> @@ -81,6 +85,48 @@ int iommu_group_assign(struct pci_dev *pdev, void *arg)
>      return 0;
>  }
>  
> +int iommu_get_device_group(struct domain *d, pci_sbdf_t sbdf,
> +                           XEN_GUEST_HANDLE_64(uint32) buf,
> +                           unsigned int max_sdevs)
> +{
> +    struct iommu_group *grp = NULL;
> +    struct pci_dev *pdev;
> +    unsigned int i = 0;
> +
> +    pcidevs_lock();
> +
> +    for_each_pdev ( d, pdev )
> +    {
> +        if ( pdev->sbdf.sbdf == sbdf.sbdf )
> +        {
> +            grp = pdev->grp;
> +            break;
> +        }
> +    }
> +
> +    if ( !grp )
> +        goto out;
> +
> +    for_each_pdev ( d, pdev )
> +    {
> +        if ( xsm_get_device_group(XSM_HOOK, pdev->sbdf.sbdf) ||
> +             pdev->grp != grp )
> +            continue;
> +
> +        if ( i < max_sdevs &&

AFAICT you are adding the check here in order to keep the current
behaviour?

But isn't it wrong to not report to the caller that the buffer was
smaller than required, and that the returned result is partial?

I don't see any way a caller can differentiate between a result that
fills the whole buffer and one that's actually partial because the
buffer provided was smaller than required. I think this function
should return -ENOSPC in that case.
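
Something along the lines of the sketch below maybe (rough and
untested: it assumes a local 'rc' that gets returned via the existing
'out' label, and the copy_to_guest_offset() step is a guess since the
rest of the hunk is cut from the quote above):

    for_each_pdev ( d, pdev )
    {
        uint32_t bdf = pdev->sbdf.sbdf;

        if ( xsm_get_device_group(XSM_HOOK, pdev->sbdf.sbdf) ||
             pdev->grp != grp )
            continue;

        if ( i >= max_sdevs )
        {
            /* Buffer exhausted before the group was: report it. */
            rc = -ENOSPC;
            goto out;
        }

        if ( unlikely(copy_to_guest_offset(buf, i++, &bdf, 1)) )
        {
            rc = -EFAULT;
            goto out;
        }
    }

That way the caller can at least tell the result is truncated and
retry with a bigger buffer if it cares.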

Thanks, Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

