
Re: [Xen-devel] [PATCH v3 4/4] iommu / pci: re-implement XEN_DOMCTL_get_device_group...


  • To: Paul Durrant <paul.durrant@xxxxxxxxxx>
  • From: Jan Beulich <JBeulich@xxxxxxxx>
  • Date: Wed, 24 Jul 2019 15:27:42 +0000
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Wed, 24 Jul 2019 15:29:03 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 16.07.2019 12:16, Paul Durrant wrote:
> +int iommu_get_device_group(struct domain *d, pci_sbdf_t sbdf,
> +                           XEN_GUEST_HANDLE_64(uint32) buf,
> +                           unsigned int max_sdevs)
> +{
> +    struct iommu_group *grp = NULL;
> +    struct pci_dev *pdev;
> +    unsigned int i = 0;
> +
> +    pcidevs_lock();
> +
> +    for_each_pdev ( d, pdev )
> +    {
> +        if ( pdev->sbdf.sbdf == sbdf.sbdf )
> +        {
> +            grp = pdev->grp;
> +            break;
> +        }
> +    }
> +
> +    if ( !grp )
> +        goto out;
> +
> +    for_each_pdev ( d, pdev )
> +    {
> +        if ( xsm_get_device_group(XSM_HOOK, pdev->sbdf.sbdf) ||
> +             pdev->grp != grp )
> +            continue;
> +
> +        if ( i < max_sdevs &&
> +             unlikely(copy_to_guest_offset(buf, i++, &pdev->sbdf.sbdf, 1)) )

If you want to avoid breaking existing callers, you'll have to mimic
here ...

> -static int iommu_get_device_group(
> -    struct domain *d, u16 seg, u8 bus, u8 devfn,
> -    XEN_GUEST_HANDLE_64(uint32) buf, int max_sdevs)
> -{
> -    const struct domain_iommu *hd = dom_iommu(d);
> -    struct pci_dev *pdev;
> -    int group_id, sdev_id;
> -    u32 bdf;
> -    int i = 0;
> -    const struct iommu_ops *ops = hd->platform_ops;
> -
> -    if ( !iommu_enabled || !ops || !ops->get_device_group_id )
> -        return 0;
> -
> -    group_id = ops->get_device_group_id(seg, bus, devfn);
> -
> -    pcidevs_lock();
> -    for_each_pdev( d, pdev )
> -    {
> -        if ( (pdev->seg != seg) ||
> -             ((pdev->bus == bus) && (pdev->devfn == devfn)) )
> -            continue;
> -
> -        if ( xsm_get_device_group(XSM_HOOK, (seg << 16) | (pdev->bus << 8) |
> -                                  pdev->devfn) )
> -            continue;
> -
> -        sdev_id = ops->get_device_group_id(seg, pdev->bus, pdev->devfn);
> -        if ( (sdev_id == group_id) && (i < max_sdevs) )
> -        {
> -            bdf = 0;
> -            bdf |= (pdev->bus & 0xff) << 16;
> -            bdf |= (pdev->devfn & 0xff) << 8;
> -
> -            if ( unlikely(copy_to_guest_offset(buf, i, &bdf, 1)) )

... this rather odd organization of the BDF. Omitting the segment is, I
think, fine, as I don't expect groups to extend past segment
boundaries (and iirc neither Intel's nor AMD's implementation has
any means for this to happen).
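
For illustration, a minimal sketch of what mimicking that legacy layout
might look like in the new loop (it reuses the bus/devfn fields shown in
the quoted hunk; this is only a sketch under that assumption, not the
actual follow-up change):

    /*
     * Keep the pre-existing guest-visible layout rather than copying
     * pdev->sbdf.sbdf verbatim: bus in bits 16-23, devfn in bits 8-15,
     * segment omitted.
     */
    uint32_t bdf = ((uint32_t)pdev->bus << 16) |
                   ((uint32_t)pdev->devfn << 8);

    if ( i < max_sdevs &&
         unlikely(copy_to_guest_offset(buf, i++, &bdf, 1)) )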

Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
