Re: [PATCH v5 01/15] IOMMU/x86: restrict IO-APIC mappings for PV Dom0
On 31.05.2022 16:40, Roger Pau Monné wrote:
> On Fri, May 27, 2022 at 01:12:06PM +0200, Jan Beulich wrote:
>> While already the case for PVH, there's no reason to treat PV
>> differently here, though of course the addresses get taken from another
>> source in this case. Except that, to match CPU side mappings, by default
>> we permit r/o ones. This then also means we now deal consistently with
>> IO-APICs whose MMIO is or is not covered by E820 reserved regions.
>>
>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>
> Reviewed-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
Thanks.
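
For context on the interface change being acked here: hwdom_iommu_map() stops returning a plain bool and instead returns an IOMMU permission mask, so the caller can distinguish "don't map at all", "map read-only" (the new IO-APIC case for PV Dom0), and "map read/write". Below is a minimal, self-contained sketch of that bool-to-mask pattern; the flag values, the classify_pfn() helper and its printf-based caller are illustrative stand-ins, not the actual Xen code.

/*
 * Illustrative sketch only (not the Xen implementation): demonstrates the
 * bool -> permission-mask calling pattern used by the quoted patch.  The
 * IOMMUF_* values below are stand-ins for Xen's real definitions.
 */
#include <stdbool.h>
#include <stdio.h>

#define IOMMUF_readable  (1u << 0)  /* stand-in flag */
#define IOMMUF_writable  (1u << 1)  /* stand-in flag */

/* Hypothetical classifier: 0 = don't map, else a permission mask. */
static unsigned int classify_pfn(unsigned long pfn, bool ro_mmio)
{
    if ( pfn >= 0xfee00 && pfn <= 0xfeeff )      /* Interrupt Address Range */
        return 0;
    if ( ro_mmio )                               /* e.g. IO-APIC page, PV Dom0 */
        return IOMMUF_readable;
    return IOMMUF_readable | IOMMUF_writable;    /* everything else */
}

int main(void)
{
    const unsigned long pfns[] = { 0xfec00, 0xfee00, 0x1000 };
    const bool ro[] = { true, false, false };
    unsigned int i;

    for ( i = 0; i < 3; i++ )
    {
        unsigned int perms = classify_pfn(pfns[i], ro[i]);

        /* Caller pattern: skip the page when no permissions remain. */
        if ( !perms )
            printf("pfn %#lx: not mapped\n", pfns[i]);
        else
            printf("pfn %#lx: mapped %s\n", pfns[i],
                   perms & IOMMUF_writable ? "r/w" : "r/o");
    }

    return 0;
}

Returning 0 keeps the old "skip the page" behaviour, while IOMMUF_readable alone yields the r/o mapping that matches the CPU-side mmio_ro_ranges handling described in the commit message.
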
>> @@ -289,44 +290,75 @@ static bool __hwdom_init hwdom_iommu_map
>>       * that fall in unusable ranges for PV Dom0.
>>       */
>>      if ( (pfn > max_pfn && !mfn_valid(mfn)) || xen_in_range(pfn) )
>> -        return false;
>> +        return 0;
>>
>>      switch ( type = page_get_ram_type(mfn) )
>>      {
>>      case RAM_TYPE_UNUSABLE:
>> -        return false;
>> +        return 0;
>>
>>      case RAM_TYPE_CONVENTIONAL:
>>          if ( iommu_hwdom_strict )
>> -            return false;
>> +            return 0;
>>          break;
>>
>>      default:
>>          if ( type & RAM_TYPE_RESERVED )
>>          {
>>              if ( !iommu_hwdom_inclusive && !iommu_hwdom_reserved )
>> -                return false;
>> +                perms = 0;
>>          }
>> -        else if ( is_hvm_domain(d) || !iommu_hwdom_inclusive || pfn > max_pfn )
>> -            return false;
>> +        else if ( is_hvm_domain(d) )
>> +            return 0;
>> +        else if ( !iommu_hwdom_inclusive || pfn > max_pfn )
>> +            perms = 0;
>>      }
>>
>>      /* Check that it doesn't overlap with the Interrupt Address Range. */
>>      if ( pfn >= 0xfee00 && pfn <= 0xfeeff )
>> -        return false;
>> +        return 0;
>>      /* ... or the IO-APIC */
>> -    for ( i = 0; has_vioapic(d) && i < d->arch.hvm.nr_vioapics; i++ )
>> -        if ( pfn == PFN_DOWN(domain_vioapic(d, i)->base_address) )
>> -            return false;
>> +    if ( has_vioapic(d) )
>> +    {
>> +        for ( i = 0; i < d->arch.hvm.nr_vioapics; i++ )
>> +            if ( pfn == PFN_DOWN(domain_vioapic(d, i)->base_address) )
>> +                return 0;
>> +    }
>> +    else if ( is_pv_domain(d) )
>> +    {
>> +        /*
>> +         * Be consistent with CPU mappings: Dom0 is permitted to establish r/o
>> +         * ones there (also for e.g. HPET in certain cases), so it should also
>> +         * have such established for IOMMUs.
>> +         */
>> +        if ( iomem_access_permitted(d, pfn, pfn) &&
>> +             rangeset_contains_singleton(mmio_ro_ranges, pfn) )
>> +            perms = IOMMUF_readable;
>> +    }
>>      /*
>>       * ... or the PCIe MCFG regions.
With this comment (which I leave alone) ...
>>       * TODO: runtime added MMCFG regions are not checked to make sure they
>>       * don't overlap with already mapped regions, thus preventing trapping.
>>       */
>>      if ( has_vpci(d) && vpci_is_mmcfg_address(d, pfn_to_paddr(pfn)) )
>> -        return false;
>> +        return 0;
>> +    else if ( is_pv_domain(d) )
>> +    {
>> +        /*
>> +         * Don't extend consistency with CPU mappings to PCI MMCFG regions.
>> +         * These shouldn't be accessed via DMA by devices.
>
> Could you expand the comment a bit to explicitly mention the reason
> why MMCFG regions shouldn't be accessible from device DMA operations?
... it's hard to tell what I should write here. I'd expect extended
reasoning to go there (if anywhere). I'd be okay adjusting the earlier
comment, if only I knew what to write. "We don't want them to be
accessed that way" seems a little blunt. I could say "Devices have
other means to access PCI config space", but this not being said there
I took as being implied. Or else what was the reason to exclude these
for PVH Dom0?
Jan