Re: [PATCH v5 01/15] IOMMU/x86: restrict IO-APIC mappings for PV Dom0
On Fri, May 27, 2022 at 01:12:06PM +0200, Jan Beulich wrote:
> While already the case for PVH, there's no reason to treat PV
> differently here, though of course the addresses get taken from another
> source in this case. Except that, to match CPU side mappings, by default
> we permit r/o ones. This then also means we now deal consistently with
> IO-APICs whose MMIO is or is not covered by E820 reserved regions.
>
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>

Reviewed-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>

Just one comment below.

> ---
> v5: Extend to also cover e.g. HPET, which in turn means explicitly
>     excluding PCI MMCFG ranges.
> [integrated] v1: Integrate into series.
> [standalone] v2: Keep IOMMU mappings in sync with CPU ones.
>
> --- a/xen/drivers/passthrough/x86/iommu.c
> +++ b/xen/drivers/passthrough/x86/iommu.c
> @@ -13,6 +13,7 @@
>   */
>
>  #include <xen/sched.h>
> +#include <xen/iocap.h>
>  #include <xen/iommu.h>
>  #include <xen/paging.h>
>  #include <xen/guest_access.h>
> @@ -275,12 +276,12 @@ void iommu_identity_map_teardown(struct
>      }
>  }
>
> -static bool __hwdom_init hwdom_iommu_map(const struct domain *d,
> -                                         unsigned long pfn,
> -                                         unsigned long max_pfn)
> +static unsigned int __hwdom_init hwdom_iommu_map(const struct domain *d,
> +                                                 unsigned long pfn,
> +                                                 unsigned long max_pfn)
>  {
>      mfn_t mfn = _mfn(pfn);
> -    unsigned int i, type;
> +    unsigned int i, type, perms = IOMMUF_readable | IOMMUF_writable;
>
>      /*
>       * Set up 1:1 mapping for dom0. Default to include only conventional RAM
> @@ -289,44 +290,75 @@ static bool __hwdom_init hwdom_iommu_map
>       * that fall in unusable ranges for PV Dom0.
>       */
>      if ( (pfn > max_pfn && !mfn_valid(mfn)) || xen_in_range(pfn) )
> -        return false;
> +        return 0;
>
>      switch ( type = page_get_ram_type(mfn) )
>      {
>      case RAM_TYPE_UNUSABLE:
> -        return false;
> +        return 0;
>
>      case RAM_TYPE_CONVENTIONAL:
>          if ( iommu_hwdom_strict )
> -            return false;
> +            return 0;
>          break;
>
>      default:
>          if ( type & RAM_TYPE_RESERVED )
>          {
>              if ( !iommu_hwdom_inclusive && !iommu_hwdom_reserved )
> -                return false;
> +                perms = 0;
>          }
> -        else if ( is_hvm_domain(d) || !iommu_hwdom_inclusive || pfn > max_pfn )
> -            return false;
> +        else if ( is_hvm_domain(d) )
> +            return 0;
> +        else if ( !iommu_hwdom_inclusive || pfn > max_pfn )
> +            perms = 0;
>      }
>
>      /* Check that it doesn't overlap with the Interrupt Address Range. */
>      if ( pfn >= 0xfee00 && pfn <= 0xfeeff )
> -        return false;
> +        return 0;
>      /* ... or the IO-APIC */
> -    for ( i = 0; has_vioapic(d) && i < d->arch.hvm.nr_vioapics; i++ )
> -        if ( pfn == PFN_DOWN(domain_vioapic(d, i)->base_address) )
> -            return false;
> +    if ( has_vioapic(d) )
> +    {
> +        for ( i = 0; i < d->arch.hvm.nr_vioapics; i++ )
> +            if ( pfn == PFN_DOWN(domain_vioapic(d, i)->base_address) )
> +                return 0;
> +    }
> +    else if ( is_pv_domain(d) )
> +    {
> +        /*
> +         * Be consistent with CPU mappings: Dom0 is permitted to establish r/o
> +         * ones there (also for e.g. HPET in certain cases), so it should also
> +         * have such established for IOMMUs.
> +         */
> +        if ( iomem_access_permitted(d, pfn, pfn) &&
> +             rangeset_contains_singleton(mmio_ro_ranges, pfn) )
> +            perms = IOMMUF_readable;
> +    }
>      /*
>       * ... or the PCIe MCFG regions.
>       * TODO: runtime added MMCFG regions are not checked to make sure they
>       * don't overlap with already mapped regions, thus preventing trapping.
>       */
>      if ( has_vpci(d) && vpci_is_mmcfg_address(d, pfn_to_paddr(pfn)) )
> -        return false;
> +        return 0;
> +    else if ( is_pv_domain(d) )
> +    {
> +        /*
> +         * Don't extend consistency with CPU mappings to PCI MMCFG regions.
> +         * These shouldn't be accessed via DMA by devices.

Could you expand the comment a bit to explicitly mention the reason why
MMCFG regions shouldn't be accessible from device DMA operations?

Thanks, Roger.