
Re: [Xen-devel] [PATCH] x86/iommu: remove usage of {set/clear}_identity_p2m_entry against PV domains



On Mon, Aug 12, 2019 at 02:17:53PM +0100, George Dunlap wrote:
> On 8/2/19 10:22 AM, Roger Pau Monne wrote:
> > Switch rmrr_identity_mapping to use iommu_{un}map in order to
> > establish RMRR mappings for PV domains, like it's done in
> > arch_iommu_hwdom_init. This solves the issue of a PV hardware domain
> > not getting RMRR mappings because {set/clear}_identity_p2m_entry was
> > not properly updating the iommu page tables.
> 
> Sorry, I think this description is somewhat backwards: you're saying
> what you're doing first, and then saying what the problematic behavior
> was, but not actually saying what was causing the problematic behavior.
> 
> Why was {set,clear}_identity_p2m not updating the page tables?
> 
> I agree with Jan, that figuring that out is a prerequisite for any kind
> of fix: if `need_iommu_pt_sync()` is false at that point for the
> hardware domain, then there's a bigger problem than RMRRs not being
> adjusted properly.

need_iommu_pt_sync is indeed false for a PV hardware domain not
running in strict mode, see:

http://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=xen/drivers/passthrough/iommu.c;h=f8c3bf53bd1793df93d7ddea6564dc929f40c156;hb=HEAD#l192
https://lists.xenproject.org/archives/html/xen-devel/2019-07/msg01547.html
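
For illustration, here's a minimal, self-contained sketch of that effect (toy types and helper names only, not the actual Xen code or signatures): when need_sync is false the identity p2m update simply never reaches the iommu page tables, which is consistent with the behaviour described in the commit message.

/*
 * Purely illustrative sketch (simplified stand-ins, not the actual Xen
 * code) of why a p2m update does not reach the iommu page tables when
 * need_iommu_pt_sync() is false, as is the case for a non-strict PV
 * hardware domain.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct domain {
    bool iommu_need_sync;   /* toy model of need_iommu_pt_sync(d) */
};

/* Stand-in for writing an entry into the iommu page tables. */
static void iommu_map_page(uint64_t pfn)
{
    printf("iommu identity mapping added for pfn %#llx\n",
           (unsigned long long)pfn);
}

/* Toy model of set_identity_p2m_entry(): the iommu is only updated when
 * the domain is marked as needing p2m/iommu synchronisation. */
static void set_identity_entry(struct domain *d, uint64_t pfn)
{
    /* (For PV there's no p2m to update, so that part is a no-op.) */
    if ( d->iommu_need_sync )
        iommu_map_page(pfn);
    /* else: nothing happens, i.e. no RMRR mapping in the iommu */
}

int main(void)
{
    struct domain hwdom = { .iommu_need_sync = false };  /* non-strict */

    set_identity_entry(&hwdom, 0xabcd);   /* prints nothing */
    return 0;
}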

Not syncing is fine for a non-strict PV hardware domain, since it already
has all of the host memory (minus memory used by Xen) mapped in the iommu
page tables, except for RMRR regions not marked as reserved in the e820
memory map, which are instead added using set_identity_p2m_entry.
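
To make that layout concrete, a toy, self-contained example (made-up helper names and e820 contents, not the real Xen interfaces) of which ranges the bulk hardware domain loop covers and which ones have to be added by rmrr_identity_mapping:

/*
 * Toy sketch: the hardware domain gets identity mappings for RAM and
 * reserved regions up front, while an RMRR living in an e820 hole is
 * left out and must be mapped separately.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define NR_PFNS 16U   /* toy host memory size, in pages */

enum e820_type { E820_HOLE, E820_RAM, E820_RESERVED };

/* Stand-in for establishing one identity mapping in the iommu page tables. */
static void iommu_identity_map(uint64_t pfn)
{
    printf("identity map pfn %" PRIu64 "\n", pfn);
}

int main(void)
{
    enum e820_type e820[NR_PFNS];
    uint64_t pfn;

    for ( pfn = 0; pfn < NR_PFNS; pfn++ )
        e820[pfn] = E820_RAM;
    e820[10] = e820[11] = E820_HOLE;   /* toy RMRR at pfns 10-11, not reserved */

    /* Bulk loop, akin to arch_iommu_hwdom_init(): RAM and reserved
     * regions get identity mappings, holes are skipped. */
    for ( pfn = 0; pfn < NR_PFNS; pfn++ )
        if ( e820[pfn] != E820_HOLE )
            iommu_identity_map(pfn);

    /* The RMRR at 10-11 is still unmapped here and needs an explicit
     * identity mapping, which is rmrr_identity_mapping's job. */
    for ( pfn = 10; pfn <= 11; pfn++ )
        iommu_identity_map(pfn);

    return 0;
}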

The problem here is that this patch alone doesn't fix the issue for the
reporter; an additional flush seems to be required in order to get things
working on his end:

https://lists.xenproject.org/archives/html/xen-devel/2019-08/msg00526.html

My theory is that the system Roman is using boots with DMA remapping
already enabled in the iommu, and that somehow the call to
iommu_flush_all in intel_iommu_hwdom_init doesn't work properly, while
calling iommu_iotlb_flush_all does do the right thing. I'm still waiting
for Roman to come back with the results of my last debug patches in order
to figure out whether the hypothesis above is true.
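
In case it helps to see the hypothesis spelled out, a toy model (no real Xen or VT-d code, all names made up) of why rewriting the page tables alone wouldn't be enough if the firmware left DMA remapping enabled with stale IOTLB contents:

/*
 * Toy model: a cached IOTLB translation masks the freshly written page
 * tables until a full IOTLB flush is performed.
 */
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct iotlb_entry { uint64_t dfn, mfn; bool valid; };

/* A translation left cached from firmware-enabled DMA remapping. */
static struct iotlb_entry iotlb = { .dfn = 10, .mfn = 99, .valid = true };

/* What the freshly rewritten page tables now say for dfn 10. */
static const uint64_t pt_mfn_for_dfn10 = 10;

/* Stand-in for a full IOTLB flush (the iommu_iotlb_flush_all step). */
static void iotlb_flush_all(void)
{
    iotlb.valid = false;
}

static uint64_t translate(uint64_t dfn)
{
    /* The hardware uses the cached translation when one is present... */
    if ( iotlb.valid && iotlb.dfn == dfn )
        return iotlb.mfn;
    /* ...and only walks the page tables on an IOTLB miss. */
    return pt_mfn_for_dfn10;
}

int main(void)
{
    printf("before flush: dfn 10 -> mfn %" PRIu64 "\n", translate(10)); /* 99 */
    iotlb_flush_all();
    printf("after flush:  dfn 10 -> mfn %" PRIu64 "\n", translate(10)); /* 10 */
    return 0;
}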

That however still wouldn't explain the weird behaviour of
iommu_flush_all, and further debugging will be required.

Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

