[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [PATCH] x86/mm: do not mark IO regions as Xen heap


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Thu, 17 Sep 2020 16:28:49 +0200
  • Cc: <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Paul Durrant <paul@xxxxxxx>
  • Delivery-date: Thu, 17 Sep 2020 14:29:25 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Thu, Sep 17, 2020 at 04:12:23PM +0200, Jan Beulich wrote:
> On 10.09.2020 15:35, Roger Pau Monne wrote:
> > arch_init_memory will treat all the gaps on the physical memory map
> > between RAM regions as MMIO and use share_xen_page_with_guest in order
> > to assign them to dom_io. This has the side effect of setting the Xen
> > heap flag on such pages, and thus is_special_page would then return
> > true which is an issue in epte_get_entry_emt because such pages will
> > be forced to use write-back cache attributes.
> > 
> > Fix this by introducing a new helper to assign the MMIO regions to
> > dom_io without setting the Xen heap flag on the pages, so that
> > is_special_page will return false and the pages won't be forced to use
> > write-back cache attributes.
> > 
> > Fixes: 81fd0d3ca4b2cd ('x86/hvm: simplify 'mmio_direct' check in epte_get_entry_emt()')
> > Suggested-by: Jan Beulich <jbeulich@xxxxxxxx>
> > Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> 
> I'm sorry for noticing this only now, but there is a place where
> we actually build on these pages being marked "special": In
> xenmem_add_to_physmap_one() we have
> 
>     if ( mfn_valid(prev_mfn) )
>     {
>         if ( is_special_page(mfn_to_page(prev_mfn)) )
>             /* Special pages are simply unhooked from this phys slot. */
>             rc = guest_physmap_remove_page(d, gpfn, prev_mfn, PAGE_ORDER_4K);
>         else
>             /* Normal domain memory is freed, to avoid leaking memory. */
>             rc = guest_remove_page(d, gfn_x(gpfn));
>     }
> 
> As you'll notice MMIO pages not satisfying mfn_valid() will simply
> bypass any updates here, but the subsequent guest_physmap_add_page()
> will have the P2M entry updated anyway. MMIO pages which satisfy
> mfn_valid(), however, would previously have been passed into
> guest_physmap_remove_page() (which generally would succeed) while
> now guest_remove_page() will (afaict) fail (get_page() there won't
> succeed).

Would Xen even get to the get_page() in guest_remove_page() in that case?

There's a p2m_mmio_direct type check that will succeed for MMIO
ranges, and that path just clears the p2m entry and returns before
any get_page() is reached.

Roger.
