
Re: [Xen-devel] [PATCH v4 06/13] x86/IOMMU: don't restrict IRQ affinities to online CPUs


  • To: Jan Beulich <JBeulich@xxxxxxxx>
  • From: "Woods, Brian" <Brian.Woods@xxxxxxx>
  • Date: Wed, 24 Jul 2019 19:53:44 +0000
  • Accept-language: en-US
  • Cc: "kevin.tian@xxxxxxxxx" <kevin.tian@xxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, "Suthikulpanit, Suravee" <Suravee.Suthikulpanit@xxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, "Woods, Brian" <Brian.Woods@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Delivery-date: Wed, 24 Jul 2019 19:53:51 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Tue, Jul 16, 2019 at 07:40:57AM +0000, Jan Beulich wrote:
> In line with "x86/IRQ: desc->affinity should strictly represent the
> requested value", the affinities of the internally used IRQ(s) also
> shouldn't be restricted to online CPUs. Make set_desc_affinity() (and,
> by implication, set_msi_affinity()) cope with a NULL mask being
> passed (just like
> assign_irq_vector() does), and have IOMMU code pass NULL instead of
> &cpu_online_map (when, for VT-d, there's no NUMA node information
> available).
> 
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>

Acked-by: Brian Woods <brian.woods@xxxxxxx>

> ---
> v4: New.
> 
> --- a/xen/arch/x86/irq.c
> +++ b/xen/arch/x86/irq.c
> @@ -796,18 +796,26 @@ unsigned int set_desc_affinity(struct ir
>       unsigned long flags;
>       cpumask_t dest_mask;
>   
> -    if (!cpumask_intersects(mask, &cpu_online_map))
> +    if ( mask && !cpumask_intersects(mask, &cpu_online_map) )
>           return BAD_APICID;
>   
>       spin_lock_irqsave(&vector_lock, flags);
> -    ret = _assign_irq_vector(desc, mask);
> +    ret = _assign_irq_vector(desc, mask ?: TARGET_CPUS);
>       spin_unlock_irqrestore(&vector_lock, flags);
>   
> -    if (ret < 0)
> +    if ( ret < 0 )
>           return BAD_APICID;
>   
> -    cpumask_copy(desc->affinity, mask);
> -    cpumask_and(&dest_mask, mask, desc->arch.cpu_mask);
> +    if ( mask )
> +    {
> +        cpumask_copy(desc->affinity, mask);
> +        cpumask_and(&dest_mask, mask, desc->arch.cpu_mask);
> +    }
> +    else
> +    {
> +        cpumask_setall(desc->affinity);
> +        cpumask_copy(&dest_mask, desc->arch.cpu_mask);
> +    }
>       cpumask_and(&dest_mask, &dest_mask, &cpu_online_map);
>   
>       return cpu_mask_to_apicid(&dest_mask);
> --- a/xen/drivers/passthrough/amd/iommu_init.c
> +++ b/xen/drivers/passthrough/amd/iommu_init.c
> @@ -888,7 +888,7 @@ static void enable_iommu(struct amd_iomm
>   
>       desc = irq_to_desc(iommu->msi.irq);
>       spin_lock(&desc->lock);
> -    set_msi_affinity(desc, &cpu_online_map);
> +    set_msi_affinity(desc, NULL);
>       spin_unlock(&desc->lock);
>   
>       amd_iommu_msi_enable(iommu, IOMMU_CONTROL_ENABLED);
> --- a/xen/drivers/passthrough/vtd/iommu.c
> +++ b/xen/drivers/passthrough/vtd/iommu.c
> @@ -2133,11 +2133,11 @@ static void adjust_irq_affinity(struct a
>       const struct acpi_rhsa_unit *rhsa = drhd_to_rhsa(drhd);
>       unsigned int node = rhsa ? pxm_to_node(rhsa->proximity_domain)
>                                : NUMA_NO_NODE;
> -    const cpumask_t *cpumask = &cpu_online_map;
> +    const cpumask_t *cpumask = NULL;
>       struct irq_desc *desc;
>   
>       if ( node < MAX_NUMNODES && node_online(node) &&
> -         cpumask_intersects(&node_to_cpumask(node), cpumask) )
> +         cpumask_intersects(&node_to_cpumask(node), &cpu_online_map) )
>           cpumask = &node_to_cpumask(node);
>   
>       desc = irq_to_desc(drhd->iommu->msi.irq);
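
For reference, the behavioural core of the change is easy to model
outside the hypervisor: a NULL mask now means "no affinity restriction
was requested" rather than being an error, so desc->affinity records
"all CPUs" while the actually targeted set (dest_mask) is still
filtered against cpu_online_map. Below is a minimal, compilable
userspace sketch of that logic. fake_set_desc_affinity(), ONLINE_MASK
and ARCH_CPU_MASK are made-up stand-ins (a plain unsigned long models
cpumask_t); they are not Xen identifiers.

    #include <stdio.h>

    #define ONLINE_MASK   0x0fUL  /* pretend CPUs 0-3 are online */
    #define ARCH_CPU_MASK 0x06UL  /* vector currently targets CPUs 1-2 */

    /* Models the post-patch set_desc_affinity(); 0 stands in for
     * BAD_APICID, the return value for the destination mask. */
    static unsigned long fake_set_desc_affinity(const unsigned long *mask,
                                                unsigned long *affinity)
    {
        unsigned long dest_mask;

        /* A NULL mask is no longer rejected here. */
        if ( mask && !(*mask & ONLINE_MASK) )
            return 0;

        if ( mask )
        {
            *affinity = *mask;                /* record request verbatim */
            dest_mask = *mask & ARCH_CPU_MASK;
        }
        else
        {
            *affinity = ~0UL;                 /* cpumask_setall() */
            dest_mask = ARCH_CPU_MASK;
        }

        return dest_mask & ONLINE_MASK;       /* only online CPUs targeted */
    }

    int main(void)
    {
        unsigned long affinity, dest;
        unsigned long req = 0x30UL;  /* CPUs 4-5: offline in this model */

        /* IOMMU-style caller: NULL == "no restriction requested". */
        dest = fake_set_desc_affinity(NULL, &affinity);
        printf("dest=%#lx affinity=%#lx\n", dest, affinity);

        /* An explicit mask with no online CPUs still fails. */
        dest = fake_set_desc_affinity(&req, &affinity);
        printf("dest=%#lx (0 models BAD_APICID)\n", dest);
        return 0;
    }

The same convention is what lets the VT-d hunk above start from a NULL
cpumask and only narrow it to &node_to_cpumask(node) when usable NUMA
information is available, and what the AMD hunk relies on when calling
set_msi_affinity(desc, NULL).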

-- 
Brian Woods


 

