
Re: [Xen-devel] [v3][PATCH 03/16] xen/vtd: create RMRR mapping



On 2015/6/11 22:07, Tim Deegan wrote:
At 17:31 +0800 on 11 Jun (1434043916), Chen, Tiejun wrote:
       while ( base_pfn < end_pfn )
       {
-        int err = intel_iommu_map_page(d, base_pfn, base_pfn,
-                                       IOMMUF_readable|IOMMUF_writable);
+        int err = set_identity_p2m_entry(d, base_pfn, p2m_access_rw);

           if ( err )
               return err;

Tim has another comment: replace the earlier unmap with

Yes, I knew this.

guest_physmap_remove_page(), which will call the iommu
unmap internally. Please include this change too.


But,

guest_physmap_remove_page()
      |
      + p2m_remove_page()
        |
        + iommu_unmap_page()
        |
        + p2m_set_entry(p2m, gfn, _mfn(INVALID_MFN), xxx)

I think this already removes these pages on both the EPT and VT-d sides, right?

Yes; this is about this code further up in the same function:

            while ( base_pfn < end_pfn )
            {
                if ( intel_iommu_unmap_page(d, base_pfn) )
                    ret = -ENXIO;
                base_pfn++;
            }

which ought to be calling guest_physmap_remove_page() or similar, to
make sure that both the IOMMU and EPT mappings get removed.


I still think the current implementation might be fine at this point.

We have two scenarios here: the shared-EPT case and the non-shared-EPT case. But no matter which case we're in, shouldn't guest_physmap_remove_page() always call p2m->set_entry() to clear *all* *valid* mfns owned by a given VM? And p2m->set_entry() also calls iommu_unmap_page() internally, so nothing further needs special handling.

If I'm wrong or misunderstanding, please correct me :)

Thanks
Tiejun


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
