
Re: [Xen-devel] [BUG] After upgrade to Xen 4.12.0 iommu=no-igfx



On Mon, Jul 22, 2019 at 04:03:44PM +0200, Paul Durrant wrote:
> > -----Original Message-----
> [snip]
> > > > diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
> > > > index 79ec6719f5..9d91f0d633 100644
> > > > --- a/xen/drivers/passthrough/iommu.c
> > > > +++ b/xen/drivers/passthrough/iommu.c
> > > > @@ -185,7 +185,7 @@ void __hwdom_init iommu_hwdom_init(struct domain *d)
> > > >      register_keyhandler('o', &iommu_dump_p2m_table, "dump iommu p2m table", 0);
> > > >
> > > >      hd->status = IOMMU_STATUS_initializing;
> > > > -    hd->need_sync = iommu_hwdom_strict && !iommu_use_hap_pt(d);
> > > > +    hd->need_sync = !iommu_use_hap_pt(d);
> > >
> > > But this is going to mean the if() below is true for non-strict dom0,
> > > which means it pointlessly maps the dom0 page list when
> > > hwdom_iommu_map() should have already mapped all conventional RAM.
> > 
> > Right, this all seems quite broken. Non-translated guests (ie: PV)
> > would always need iommu page-table sync, but set_identity_p2m_entry
> > contains the following:
> > 
> > if ( !paging_mode_translate(p2m->domain) )
> > {
> >     if ( !need_iommu_pt_sync(d) )
> >         return 0;
> >     return iommu_legacy_map(d, _dfn(gfn_l), _mfn(gfn_l), PAGE_ORDER_4K,
> >                             IOMMUF_readable | IOMMUF_writable);
> > }
> > 
> > I wonder whether the usage of need_iommu_pt_sync is wrong there, and
> > should instead be !has_iommu_pt(d), since non-translated domains would
> > never share the iommu page-tables anyway.
> 
> You want syncing if the domain has IOMMU page tables and shared EPT is not in 
> use, so this logic just seems wrong.

Right, so for a PV domain this would mean syncing if the iommu is in
use, because there's no sharing at all.
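
To spell out the general rule we are converging on (hypothetical
helper, just for illustration; has_iommu_pt() and iommu_use_hap_pt()
are the existing predicates, and the relaxed hwdom case discussed
below is left aside):

static inline bool domain_wants_iommu_sync(const struct domain *d)
{
    /*
     * Sync whenever the domain has IOMMU page tables and those are not
     * shared with the CPU ones. For PV guests iommu_use_hap_pt() is
     * always false, so this collapses to has_iommu_pt(d).
     */
    return has_iommu_pt(d) && !iommu_use_hap_pt(d);
}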

> > 
> > In any case, I think need_sync must be set when the iommu page tables
> > are not shared, and gating it on iommu_hwdom_strict seems wrong to me,
> > the strictness of the iommu doesn't affect whether a sync is need or
> > not.
> 
> I think need_sync is gated on strict mode because, in 'relaxed' mode, the 
> mappings that are set up when dom0 is started are supposed to be static 
> (modulo hotplug RAM)... so modifications to the domain's page list are not 
> supposed to have any effect, and so no synchronization need be done.

Hm, right, in relaxed mode dom0 is supposed to have all RAM regions
mapped in the iommu page-tables, so there's no need to modify the
iommu page tables at run time.
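
For reference, in that mode the hwdom setup amounts to something like
the sketch below (heavily simplified, not the literal init code; error
handling, the reserved/inclusive-mapping special cases and the exact
loop bounds are left out):

/* Identity-map all conventional RAM for a non-translated dom0. */
for ( pfn = 0; pfn < top_pfn; pfn++ )
{
    if ( !page_is_ram_type(pfn, RAM_TYPE_CONVENTIONAL) )
        continue;
    iommu_legacy_map(d, _dfn(pfn), _mfn(pfn), PAGE_ORDER_4K,
                     IOMMUF_readable | IOMMUF_writable);
}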

This approach seems slightly broken if DMA operations against MMIO
regions are attempted by dom0, but anyway, it seems to have worked
fine so far.

> > 
> > I've updated the patch to avoid the pointless mapping of dom0 page
> > list, but I haven't included the change to set_identity_p2m_entry.
> > 
> 
> So, I think the albeit odd-looking logic in iommu_hwdom_init() was actually 
> correct, but the code in set_identity_p2m_entry() is wrong.

Ack, see the patch below. I think it might also be helpful to add a
comment to the setting of need_sync in iommu_hwdom_init, mentioning
that relaxed domains don't need sync because all RAM is already
mapped in the iommu page tables.
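
Something like the below (just to illustrate what I have in mind, the
wording can surely be improved):

    /*
     * In strict mode the hwdom mappings must track its page list, so
     * the IOMMU page tables have to be kept in sync. In relaxed mode
     * all RAM is mapped up front and the mappings then stay static,
     * so no sync is needed (and with shared page tables there's
     * nothing to sync anyway).
     */
    hd->need_sync = iommu_hwdom_strict && !iommu_use_hap_pt(d);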

Thanks, Roger.
---8<---
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index fef97c82f6..88a2430c8c 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -836,7 +836,7 @@ guest_physmap_add_page(struct domain *d, gfn_t gfn, mfn_t mfn,
          */
         for ( i = 0; i < (1UL << page_order); ++i, ++page )
         {
-            if ( !need_iommu_pt_sync(d) )
+            if ( !has_iommu_pt(d) )
                 /* nothing */;
             else if ( get_page_and_type(page, d, PGT_writable_page) )
                 put_page_and_type(page);
@@ -1341,7 +1341,7 @@ int set_identity_p2m_entry(struct domain *d, unsigned long gfn_l,
 
     if ( !paging_mode_translate(p2m->domain) )
     {
-        if ( !need_iommu_pt_sync(d) )
+        if ( !has_iommu_pt(d) )
             return 0;
         return iommu_legacy_map(d, _dfn(gfn_l), _mfn(gfn_l), PAGE_ORDER_4K,
                                 IOMMUF_readable | IOMMUF_writable);
@@ -1432,7 +1432,7 @@ int clear_identity_p2m_entry(struct domain *d, unsigned long gfn_l)
 
     if ( !paging_mode_translate(d) )
     {
-        if ( !need_iommu_pt_sync(d) )
+        if ( !has_iommu_pt(d) )
             return 0;
         return iommu_legacy_unmap(d, _dfn(gfn_l), PAGE_ORDER_4K);
     }

