Re: [Xen-devel] [PATCH v3 2/4] x86/apic: force phys mode if interrupt remapping is disabled
On Thu, Dec 05, 2019 at 10:32:34AM +0100, Jan Beulich wrote:
> On 04.12.2019 17:20, Roger Pau Monne wrote:
> > Cluster mode can only be used with interrupt remapping support, since
> > the top 16bits of the APIC ID are filled with the cluster ID, and
> > hence on systems where the physical ID is still smaller than 255 the
> > cluster ID is not. Force x2APIC to use physical mode if there's no
> > interrupt remapping support.
> >
> > Note that this requires a further patch in order to enable x2APIC
> > without interrupt remapping support.
> >
> > Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
>
> Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
> albeit ...
>
> > --- a/xen/arch/x86/genapic/x2apic.c
> > +++ b/xen/arch/x86/genapic/x2apic.c
> > @@ -226,7 +226,23 @@ boolean_param("x2apic_phys", x2apic_phys);
> >  const struct genapic *__init apic_x2apic_probe(void)
> >  {
> >      if ( x2apic_phys < 0 )
> > -        x2apic_phys = !!(acpi_gbl_FADT.flags & ACPI_FADT_APIC_PHYSICAL);
> > +    {
> > +        if ( !iommu_intremap )
> > +            /*
> > +             * Force physical mode if there's no interrupt remapping support:
> > +             * the ID in clustered mode requires a 32 bit destination field due
> > +             * to the usage of the high 16 bits to store the cluster ID.
> > +             */
> > +            x2apic_phys = true;
> > +        else
> > +            x2apic_phys = !!(acpi_gbl_FADT.flags & ACPI_FADT_APIC_PHYSICAL);
>
> ... I wonder why you didn't make this
>
>     x2apic_phys = !iommu_intremap ||
>                   (acpi_gbl_FADT.flags & ACPI_FADT_APIC_PHYSICAL);
>
> (not the least because of allowing to drop the somewhat ugly !!).

Feel free to do it at commit (and reindent the comment), or else I can
resend a new version if that's too intrusive.

Thanks, Roger.
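
For reference, here is a small standalone C sketch, not Xen source: iommu_intremap, the FADT flags and ACPI_FADT_APIC_PHYSICAL are plain local stand-ins with a placeholder bit value. It checks that the folded expression Jan suggests yields the same x2apic_phys value as the if/else form in the patch for every combination of inputs.

    /*
     * Standalone sketch only: the names below mirror the Xen/ACPICA
     * symbols from the quoted patch but are local stand-ins here.
     */
    #include <assert.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define ACPI_FADT_APIC_PHYSICAL (1u << 19) /* stand-in flag bit */

    /* The if/else form from the patch. */
    static bool patch_form(bool iommu_intremap, unsigned int fadt_flags)
    {
        bool x2apic_phys;

        if ( !iommu_intremap )
            /* Force physical mode without interrupt remapping support. */
            x2apic_phys = true;
        else
            x2apic_phys = !!(fadt_flags & ACPI_FADT_APIC_PHYSICAL);

        return x2apic_phys;
    }

    /* Jan's suggestion: fold both cases into a single expression. */
    static bool folded_form(bool iommu_intremap, unsigned int fadt_flags)
    {
        return !iommu_intremap || (fadt_flags & ACPI_FADT_APIC_PHYSICAL);
    }

    int main(void)
    {
        for ( int remap = 0; remap <= 1; remap++ )
            for ( int phys = 0; phys <= 1; phys++ )
            {
                unsigned int flags = phys ? ACPI_FADT_APIC_PHYSICAL : 0;

                assert(patch_form(remap, flags) == folded_form(remap, flags));
            }

        printf("if/else and folded forms agree for all inputs\n");
        return 0;
    }

The fold also makes the !! Jan calls out unnecessary, since || already evaluates to 0 or 1.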