Re: [PATCH] x86/pci: Correct ECS handling with CF8/CFC emulation
On Tue, Apr 04, 2023 at 02:27:36PM +0100, Andrew Cooper wrote:
> On 03/04/2023 2:26 pm, Roger Pau Monné wrote:
> > On Mon, Apr 03, 2023 at 11:16:52AM +0100, Andrew Cooper wrote:
> >> On 03/04/2023 9:57 am, Roger Pau Monné wrote:
> >> (Quick tangent... Our PCI handling is currently very dumb.
> >> pci_mmcfg_read() returns its value by pointer but the callers never
> >> check.  Swapping it to return by value would improve code gen quite a
> >> lot.  Also, when MMCFG is active we still pass BCS accesses to IO ports.)
> > I wonder if it's really preferred to access registers below 255 using
> > the IO ports, as Linux seems to do the same (prefer IO port access if
> > possible).
> And see how many attempts there have been to change this, only blocked
> on untangling the IO port mess on other architectures (a problem Xen
> doesn't have to contend with).
>
> MMCFG, when available, is strictly preferable to IO ports.
>
> An MMCFG access is a single UC read or write, whereas IO ports are a
> pair of UC accesses *and* a global spinlock.

Right, I know it's better from a performance PoV, but didn't know
whether there were any known glitches that justify not using MMCFG when
accessing regs < 255.

> >>>>
> >>>> #define IS_SNB_GFX(id) (id == 0x01068086 || id == 0x01168086 \
> >>>>                         || id == 0x01268086 || id == 0x01028086 \
> >>>> diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c
> >>>> index 5da00e24e4ff..008367195c78 100644
> >>>> --- a/xen/arch/x86/pv/emul-priv-op.c
> >>>> +++ b/xen/arch/x86/pv/emul-priv-op.c
> >>>> @@ -245,19 +245,7 @@ static bool pci_cfg_ok(struct domain *currd, unsigned int start,
> >>>>          if ( ro_map && test_bit(machine_bdf, ro_map) )
> >>>>              return false;
> >>>>      }
> >>>> -    start |= CF8_ADDR_LO(currd->arch.pci_cf8);
> >>>> -    /* AMD extended configuration space access? */
> >>>> -    if ( CF8_ADDR_HI(currd->arch.pci_cf8) &&
> >>>> -         boot_cpu_data.x86_vendor == X86_VENDOR_AMD &&
> >>>> -         boot_cpu_data.x86 >= 0x10 && boot_cpu_data.x86 < 0x17 )
> >>>> -    {
> >>>> -        uint64_t msr_val;
> >>>> -
> >>>> -        if ( rdmsr_safe(MSR_AMD64_NB_CFG, msr_val) )
> >>>> -            return false;
> >>>> -        if ( msr_val & (1ULL << AMD64_NB_CFG_CF8_EXT_ENABLE_BIT) )
> >>>> -            start |= CF8_ADDR_HI(currd->arch.pci_cf8);
> >>>> -    }
> >>>> +    start |= CF8_REG(currd->arch.pci_cf8);
> >>>>
> >>>>      return !write ?
> >>>>          xsm_pci_config_permission(XSM_HOOK, currd, machine_bdf,
> >>>> @@ -1104,6 +1092,11 @@ static int cf_check write_msr(
> >>>>          if ( !is_hwdom_pinned_vcpu(curr) )
> >>>>              return X86EMUL_OKAY;
> >>>>          if ( (rdmsr_safe(MSR_AMD64_NB_CFG, temp) != 0) ||
> >>>> +            /*
> >>>> +             * TODO: this is broken.  What happens when dom0 is pinned but
> >>>> +             *       can't see the full system?  CF8_EXT probably ought to be a
> >>>> +             *       Xen-owned setting, and made symmetric across the system.
> >>>> +             */
> >>> I would assume CF8_EXT would be symmetric across the system, especially
> >>> given that it seems to be phased out and only used in older AMD
> >>> families that were all symmetric?
> >> The CF8_EXT bit has been phased out.  The IO ECS functionality still
> >> exists.
> >>
> >> But yes, the more I think about letting dom0 play with this, the more I
> >> think it is a fundamentally broken idea...  I bet it was from the very
> >> early AMD Fam10h days where dom0 knew how to turn it on, and Xen was
> >> trying to pretend it didn't have to touch any PCI devices.
> > It seems to me Xen should set CF8_EXT on all threads (when available)
> > and expose it to dom0, so that accesses using pci_conf_{read,write}()
> > work as expected?
> It's per northbridge in the system, not per thread.  Hence needing the
> spinlock protecting the CF8/CFC IO port pair access, and why MMCFG is
> strictly preferable.

So just setting CF8_EXT_ENABLE in MSR_AMD64_NB_CFG from the BSP should
be enough to have it enabled?  I expect all other threads would then see
the bit as set in the MSR.

Thanks, Roger.
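
For reference, a minimal sketch (in C, not from the thread and not Xen's
actual code) of the two mechanisms being compared above: a legacy CF8/CFC
config read, which takes two UC port accesses under a global lock and, with
AMD IO ECS enabled, carries register bits 11:8 in CF8 bits 27:24; and the
kind of one-shot CF8_EXT enable in MSR_AMD64_NB_CFG that the final question
refers to.  The helpers outl/inl, rdmsr/wrmsr and pci_lock/pci_unlock are
assumed placeholder primitives, not real Xen interfaces.

    /*
     * Illustrative sketch only: legacy CF8/CFC config access plus the AMD
     * IO ECS enable bit.  Helper declarations below are assumptions made
     * to keep the fragment self-contained.
     */
    #include <stdint.h>

    #define PCI_CF8                 0xcf8
    #define PCI_CFC                 0xcfc
    #define MSR_AMD64_NB_CFG        0xc001001f
    #define NB_CFG_CF8_EXT_ENABLE   (1ULL << 46)   /* EnableCf8ExtCfg */

    /* Assumed platform primitives (declarations only). */
    extern void outl(uint32_t val, uint16_t port);
    extern uint32_t inl(uint16_t port);
    extern uint64_t rdmsr(uint32_t msr);
    extern void wrmsr(uint32_t msr, uint64_t val);
    extern void pci_lock(void);
    extern void pci_unlock(void);

    static uint32_t cf8_addr(unsigned int bus, unsigned int devfn,
                             unsigned int reg)
    {
        return 0x80000000u |           /* config-space enable bit */
               ((reg & 0xf00) << 16) | /* ECS: reg bits 11:8 -> CF8 27:24,
                                          decoded only with CF8_EXT set */
               (bus << 16) | (devfn << 8) |
               (reg & 0xfc);           /* reg bits 7:2 */
    }

    uint32_t cf8_cfc_read32(unsigned int bus, unsigned int devfn,
                            unsigned int reg)
    {
        uint32_t val;

        pci_lock();                               /* CF8/CFC are a shared pair */
        outl(cf8_addr(bus, devfn, reg), PCI_CF8); /* 1st UC access */
        val = inl(PCI_CFC);                       /* 2nd UC access */
        pci_unlock();

        return val;
    }

    /* One-shot enable of IO ECS decoding, as queried at the end of the mail. */
    void enable_cf8_ext(void)
    {
        uint64_t val = rdmsr(MSR_AMD64_NB_CFG);

        if ( !(val & NB_CFG_CF8_EXT_ENABLE) )
            wrmsr(MSR_AMD64_NB_CFG, val | NB_CFG_CF8_EXT_ENABLE);
    }

By contrast, an MMCFG access to the same register is a single UC MMIO
read or write in the device's mapped configuration window, with no lock
and no extension bit needed, which is the asymmetry Andrew describes.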