
Re: [PATCH] x86/mm: do not mark IO regions as Xen heap



On 17.09.2020 16:28, Roger Pau Monné wrote:
> On Thu, Sep 17, 2020 at 04:12:23PM +0200, Jan Beulich wrote:
>> On 10.09.2020 15:35, Roger Pau Monne wrote:
>>> arch_init_memory will treat all the gaps on the physical memory map
>>> between RAM regions as MMIO and use share_xen_page_with_guest in order
>>> to assign them to dom_io. This has the side effect of setting the Xen
>>> heap flag on such pages, and thus is_special_page would then return
>>> true which is an issue in epte_get_entry_emt because such pages will
>>> be forced to use write-back cache attributes.
>>>
>>> Fix this by introducing a new helper to assign the MMIO regions to
>>> dom_io without setting the Xen heap flag on the pages, so that
>>> is_special_page will return false and the pages won't be forced to use
>>> write-back cache attributes.
>>>
>>> Fixes: 81fd0d3ca4b2cd ('x86/hvm: simplify 'mmio_direct' check in epte_get_entry_emt()')
>>> Suggested-by: Jan Beulich <jbeulich@xxxxxxxx>
>>> Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
>>
>> I'm sorry for noticing this only now, but there is a place where
>> we actually build on these pages being marked "special": In
>> xenmem_add_to_physmap_one() we have
>>
>>     if ( mfn_valid(prev_mfn) )
>>     {
>>         if ( is_special_page(mfn_to_page(prev_mfn)) )
>>             /* Special pages are simply unhooked from this phys slot. */
>>             rc = guest_physmap_remove_page(d, gpfn, prev_mfn, PAGE_ORDER_4K);
>>         else
>>             /* Normal domain memory is freed, to avoid leaking memory. */
>>             rc = guest_remove_page(d, gfn_x(gpfn));
>>     }
>>
>> As you'll notice MMIO pages not satisfying mfn_valid() will simply
>> bypass any updates here, but the subsequent guest_physmap_add_page()
>> will have the P2M entry updated anyway. MMIO pages which satisfy
>> mfn_valid(), however, would previously have been passed into
>> guest_physmap_remove_page() (which generally would succeed) while
>> now guest_remove_page() will (afaict) fail (get_page() there won't
>> succeed).
> 
> Would Xen even reach the get_page() in guest_remove_page() in that case?
> 
> There's a p2m_mmio_direct type check that will succeed for MMIO
> ranges, and that just clears the p2m entry and returns before doing
> any get_page.

Oh, I did overlook this indeed.

Jan
