
Re: [Xen-devel] [PATCH v3 03/13] VMX: implement suppress #VE.



>From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
>Sent: Thursday, July 09, 2015 6:01 AM
>
>>>> On 01.07.15 at 20:09, <edmund.h.white@xxxxxxxxx> wrote:
>> @@ -232,6 +235,15 @@ static int ept_set_middle_entry(struct p2m_domain *p2m, ept_entry_t *ept_entry)
>>      /* Manually set A bit to avoid overhead of MMU having to write it later. */
>>      ept_entry->a = 1;
>>
>> +    ept_entry->suppress_ve = 1;
>> +
>> +    table = __map_domain_page(pg);
>> +
>> +    for ( i = 0; i < EPT_PAGETABLE_ENTRIES; i++ )
>> +        table[i].suppress_ve = 1;
>> +
>> +    unmap_domain_page(table);
>
>For the moment I can certainly agree to it being done this way, but it's
>inefficient and should be cleaned up: There shouldn't be two mappings of the
>page being allocated (one in hap_alloc() and the other being added here). I
>suppose the easiest would be to pass an optional callback pointer to
>p2m_alloc_ptp(). Or, to also cover the case below in ept_p2m_init() (i.e.
>p2m_alloc_table()) a new optional hook in struct p2m_domain could be added
>for that purpose.

On this one I'm hoping you are OK with the way the code is structured now.
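
If we do revisit this later, a rough sketch of the hook variant you describe
could look like the code below. The hook name, its signature and the
init_new_table member are made up purely for illustration - only the
suppress_ve loop matches what the patch does today:

    /* Hypothetical sketch only; not part of this series.  The idea is that
     * p2m_alloc_ptp() would invoke an optional per-p2m hook while the newly
     * allocated page is still mapped, so EPT can initialise the entries
     * without a second map_domain_page()/unmap_domain_page() pair.
     *
     * New (illustrative) member in struct p2m_domain:
     *     void (*init_new_table)(void *table);
     */
    static void ept_init_new_table(void *table)
    {
        ept_entry_t *epte = table;
        unsigned int i;

        /* Default every entry of the new page to suppress #VE, as the
         * current patch does in ept_set_middle_entry(). */
        for ( i = 0; i < EPT_PAGETABLE_ENTRIES; i++ )
            epte[i].suppress_ve = 1;
    }

    /* ... and in ept_p2m_init(): p2m->init_new_table = ept_init_new_table; */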
>Albeit ...
>
>> @@ -1134,6 +1151,13 @@ int ept_p2m_init(struct p2m_domain *p2m)
>>          p2m->flush_hardware_cached_dirty = ept_flush_pml_buffers;
>>      }
>>
>> +    table = map_domain_page(pagetable_get_pfn(p2m_get_pagetable(p2m)));
>> +
>> +    for ( i = 0; i < EPT_PAGETABLE_ENTRIES; i++ )
>> +        table[i].suppress_ve = 1;
>> +
>> +    unmap_domain_page(table);
>
>... why is this needed? Bit 63 is documented to be ignored in PML4Es (just like
>in all other intermediate page tables).

Valid point - since this has no negative side-effects per se, we didn't change it.

Ravi

>
>Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

