
Re: [Xen-devel] [RFC Patch v4 7/8] x86/hvm: bump the number of pages of shadow memory



On Wed, Apr 18, 2018 at 02:53:03AM -0600, Jan Beulich wrote:
>>>> On 06.12.17 at 08:50, <chao.gao@xxxxxxxxx> wrote:
>> Each vcpu of an hvm guest consumes at least one shadow page. Currently,
>> only 256 pages (for the hap case) are pre-allocated as shadow memory at
>> the beginning. This would run out if the guest has more than 256 vcpus,
>> and guest creation would fail. Bump the number of shadow pages to
>> 2 * HVM_MAX_VCPUS for the hap case and 8 * HVM_MAX_VCPUS for the
>> shadow case.
>> 
>> This patch won't lead to more memory consumption, because the size of
>> shadow memory will be adjusted via XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION
>> according to the size of guest memory and the number of vcpus.
>
> I don't understand this: what's the purpose of bumping the values if it
> won't lead to higher memory consumption? Afaict there'd be higher
> consumption at least transiently. And I don't see why this would need
> doing independent of the intended vCPU count in the guest. I guess you
> want to base your series on top of Andrew's max-vCPU-s adjustments
> (which sadly didn't become ready in time for 4.11).

The situation here is that some pages are pre-allocated as P2M pages for
domain initialization. After vCPU creation, the total number of P2M pages
is adjusted via the domctl interface. Before vCPU creation, this domctl
is unusable because of the check in paging_domctl():
 if ( unlikely(d->vcpu == NULL) || unlikely(d->vcpu[0] == NULL) )
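
For reference, this is roughly how that check sits in paging_domctl()
(paraphrased from xen/arch/x86/mm/paging.c; take it as a sketch of the
shape, not an exact quote of any particular tree):

    int paging_domctl(struct domain *d, struct xen_domctl_shadow_op *sc,
                      XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl,
                      bool_t resuming)
    {
        ...
        /* Reject any paging op, including SHADOW_OP_SET_ALLOCATION,
         * until the domain's vCPUs have been allocated. */
        if ( unlikely(d->vcpu == NULL) || unlikely(d->vcpu[0] == NULL) )
            return -EINVAL;
        ...
    }

So the toolstack's usual "set allocation according to memory size and
vCPU count" call can only happen after vCPU creation, and everything up
to that point has to fit within the pre-allocated pool.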

When the number of a guest's vCPUs is small, the pre-allocated pages are
enough, but they won't be if the guest has more than 256 vCPUs. Each
vCPU uses at least one P2M page when it is created; see
construct_vmcs()->hap_update_paging_modes().
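
Concretely, the hap side of the change amounts to bumping the constant
used for the initial allocation; a sketch of the hunk against
hap_enable() (the shadow side is analogous with 8 * HVM_MAX_VCPUS; the
exact context lines may differ from the tree you are looking at):

    --- a/xen/arch/x86/mm/hap/hap.c
    +++ b/xen/arch/x86/mm/hap/hap.c
    @@ hap_enable():
         if ( old_pages == 0 )
         {
             paging_lock(d);
    -        rv = hap_set_allocation(d, 256, NULL);
    +        /* Each vCPU consumes at least one page at creation, so
    +         * pre-allocate enough to cover HVM_MAX_VCPUS with headroom. */
    +        rv = hap_set_allocation(d, 2 * HVM_MAX_VCPUS, NULL);
             if ( rv != 0 )
                 goto out;
             paging_unlock(d);
         }

Once the toolstack later issues XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION, the
pool is resized to match guest memory and vCPU count, so steady-state
consumption is unchanged; only the transient pre-allocation grows.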

Thanks
Chao

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
