
Re: [PATCH] x86/mm: do not mark IO regions as Xen heap



On 10.09.2020 16:41, Jan Beulich wrote:
> On 10.09.2020 15:35, Roger Pau Monne wrote:
>> arch_init_memory will treat all the gaps on the physical memory map
>> between RAM regions as MMIO and use share_xen_page_with_guest in order
>> to assign them to dom_io. This has the side effect of setting the Xen
>> heap flag on such pages, and thus is_special_page would then return
>> true which is an issue in epte_get_entry_emt because such pages will
>> be forced to use write-back cache attributes.
>>
>> Fix this by introducing a new helper to assign the MMIO regions to
>> dom_io without setting the Xen heap flag on the pages, so that
>> is_special_page will return false and the pages won't be forced to use
>> write-back cache attributes.
>>
>> Fixes: 81fd0d3ca4b2cd ('x86/hvm: simplify 'mmio_direct' check in epte_get_entry_emt()')
>> Suggested-by: Jan Beulich <jbeulich@xxxxxxxx>
>> Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> 
> Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
> albeit I'm inclined to add, while committing, a comment ...
> 
>> --- a/xen/arch/x86/mm.c
>> +++ b/xen/arch/x86/mm.c
>> @@ -271,6 +271,18 @@ static l4_pgentry_t __read_mostly split_l4e;
>>  #define root_pgt_pv_xen_slots ROOT_PAGETABLE_PV_XEN_SLOTS
>>  #endif
>>  
>> +static void __init assign_io_page(struct page_info *page)
>> +{
>> +    set_gpfn_from_mfn(mfn_x(page_to_mfn(page)), INVALID_M2P_ENTRY);
>> +
>> +    /* The incremented type count pins as writable. */
>> +    page->u.inuse.type_info = PGT_writable_page | PGT_validated | 1;
>> +
>> +    page_set_owner(page, dom_io);
>> +
>> +    page->count_info |= PGC_allocated | 1;
>> +}
> 
> ... clarifying its relationship with share_xen_page_with_guest().
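Something along these lines, perhaps (just a sketch; I'll settle the exact
wording while committing):

--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ ... @@
+/*
+ * Assign a page to dom_io much like share_xen_page_with_guest() would,
+ * except that the page does not get the Xen heap flag set, so
+ * is_special_page() keeps returning false and epte_get_entry_emt()
+ * won't force write-back cache attributes for the region.
+ */
 static void __init assign_io_page(struct page_info *page)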

I'm also going to add an assertion to share_xen_page_with_guest() to
document this and to make sure dom_io doesn't get passed there again.
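Roughly like this (a sketch only; the exact check and its placement inside
share_xen_page_with_guest() may end up looking different):

--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ ... @@
+    /* dom_io pages are handled by assign_io_page() and must not get the
+     * Xen heap flag set here. */
+    ASSERT(d != dom_io);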

Jan
