
Re: [Xen-devel] [BUG] After upgrade to Xen 4.12.0 iommu=no-igfx



On Tue, Aug 06, 2019 at 02:48:51PM -0700, Roman Shaposhnik wrote:
> On Tue, Aug 6, 2019 at 9:18 AM Roger Pau Monné <roger.pau@xxxxxxxxxx> wrote:
> >
> > On Fri, Aug 02, 2019 at 10:05:40AM +0200, Roger Pau Monné wrote:
> > > On Thu, Aug 01, 2019 at 11:25:04AM -0700, Roman Shaposhnik wrote:
> > > > This patch completely fixes the problem for me!
> > > >
> > > > Thanks Roger! I'd love to see this in Xen 4.13
> > >
> > > Thanks for testing!
> > >
> > > It's still not clear to me why the previous approach didn't work, but
> > > I think this patch is better because it removes the usage of
> > > {set/clear}_identity_p2m_entry from PV domains. I will submit this
> > > formally now.
> >
> > Sorry to bother again, but since we still don't understand why the
> > previous fix didn't work for you, and I can't reproduce this with my
> > hardware, could you give the attached patch a try?
> 
> No worries -- and thanks for helping to get it over the finish line --
> this is much appreciated!
> 
> I'm happy to say that this latest patch is also working just fine. So
> I guess this is the one that's going to land in Xen 4.13?

No, not really. Sorry, this was still a debug patch.

So I think the behaviour you are seeing can only be explained if the
IOMMU is already enabled by the firmware when booting into Xen. Could
this be the case?
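
For reference, the detection boils down to testing the translation
enable status (TES) bit in each remapping unit's global status
register. A minimal sketch using the existing VT-d helpers (the
helper name is made up for this mail):

    /*
     * Illustrative only: the firmware left DMA remapping active on
     * this unit iff the TES bit is set in its global status register.
     * @reg is the unit's mapped register window (iommu->reg).
     */
    static bool __init dma_translation_enabled(void *reg)
    {
        return dmar_readl(reg, DMAR_GSTS_REG) & DMA_GSTS_TES;
    }

The patch below performs exactly this check inside the
for_each_drhd_unit loop and, when the bit is set, clears DMA_GCMD_TE
and waits for the hardware to acknowledge before Xen goes on to set
up the IOMMU itself.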

I have a patch I would like you to try in order to confirm this. Can
you please give it a spin and report back (ideally with the Xen boot
log)?
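
If the firmware is indeed leaving DMA remapping enabled, the boot log
should contain the warning the patch adds, once per affected
remapping unit, along the lines of (exact console prefix depends on
your settings):

    (XEN) [VT-D]IOMMU: DMA remapping already enabled, disabling it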

Thanks, Roger.
---8<---
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index fef97c82f6..3605614aaf 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1341,7 +1341,7 @@ int set_identity_p2m_entry(struct domain *d, unsigned long gfn_l,
 
     if ( !paging_mode_translate(p2m->domain) )
     {
-        if ( !need_iommu_pt_sync(d) )
+        if ( !has_iommu_pt(d) )
             return 0;
         return iommu_legacy_map(d, _dfn(gfn_l), _mfn(gfn_l), PAGE_ORDER_4K,
                                 IOMMUF_readable | IOMMUF_writable);
@@ -1432,7 +1432,7 @@ int clear_identity_p2m_entry(struct domain *d, unsigned long gfn_l)
 
     if ( !paging_mode_translate(d) )
     {
-        if ( !need_iommu_pt_sync(d) )
+        if ( !has_iommu_pt(d) )
             return 0;
         return iommu_legacy_unmap(d, _dfn(gfn_l), PAGE_ORDER_4K);
     }
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 5d72270c5b..9dd0ed7f63 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -2316,6 +2316,9 @@ static int __init vtd_setup(void)
      */
     for_each_drhd_unit ( drhd )
     {
+        unsigned long flags;
+        uint32_t val;
+
         iommu = drhd->iommu;
 
         printk("Intel VT-d iommu %"PRIu32" supported page sizes: 4kB",
@@ -2351,6 +2354,22 @@ static int __init vtd_setup(void)
         if ( !vtd_ept_page_compatible(iommu) )
             iommu_hap_pt_share = 0;
 
+        spin_lock_irqsave(&iommu->register_lock, flags);
+        val = dmar_readl(iommu->reg, DMAR_GSTS_REG);
+        /*
+         * TODO: needs to be revisited once Xen supports booting with an
+         * already enabled IOMMU.
+         */
+        if ( val & DMA_GSTS_TES )
+        {
+            printk(XENLOG_WARNING VTDPREFIX
+                   "IOMMU: DMA remapping already enabled, disabling it\n");
+            dmar_writel(iommu->reg, DMAR_GCMD_REG, val & ~DMA_GCMD_TE);
+            IOMMU_WAIT_OP(iommu, DMAR_GSTS_REG, dmar_readl,
+                          !(val & DMA_GSTS_TES), val);
+        }
+        spin_unlock_irqrestore(&iommu->register_lock, flags);
+
         ret = iommu_set_interrupt(drhd);
         if ( ret )
         {
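
Note the TODO above: the patch simply forces translation off before
Xen re-initializes the unit, rather than preserving whatever mappings
the firmware set up, so it can only be a stopgap until Xen properly
supports booting with an already enabled IOMMU.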

