Re: [Xen-devel] [RFC PATCH V2 1/4] xen/hap: Increase hap page pool size for more vcpus support
On 31/08/17 06:01, Lan Tianyu wrote:
> This patch is to increase the hap page pool size to support more vcpus in a
> single VM.
>
> Signed-off-by: Lan Tianyu <tianyu.lan@xxxxxxxxx>
> ---
> xen/arch/x86/mm/hap/hap.c | 10 +++++++++-
> 1 file changed, 9 insertions(+), 1 deletion(-)
>
> diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
> index cdc77a9..96a7ed0 100644
> --- a/xen/arch/x86/mm/hap/hap.c
> +++ b/xen/arch/x86/mm/hap/hap.c
> @@ -464,6 +464,7 @@ void hap_domain_init(struct domain *d)
>  int hap_enable(struct domain *d, u32 mode)
>  {
>      unsigned int old_pages;
> +    unsigned int pages;
>      unsigned int i;
>      int rv = 0;
> 
> @@ -473,7 +474,14 @@ int hap_enable(struct domain *d, u32 mode)
>      if ( old_pages == 0 )
>      {
>          paging_lock(d);
> -        rv = hap_set_allocation(d, 256, NULL);
> +
> +        /* Increase hap page pool with max vcpu number. */
> +        if ( d->max_vcpus > 128 )
> +            pages = 512;
> +        else
> +            pages = 256;
> +
> +        rv = hap_set_allocation(d, pages, NULL);
What effect is this intended to have? hap_enable() is always called
when d->max_vcpus is 0.
d->max_vcpus isn't chosen until a subsequent hypercall. (This is one of
many unexpected surprises from multi-vcpu support having been hacked on
the side of existing Xen support, rather than being built into the
createdomain hypercall).
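
To make the ordering concrete, here is a minimal standalone C sketch (not
Xen code; the struct and helper names are simplified stand-ins for
illustration only). It shows that a pool size picked inside hap_enable()
is computed while d->max_vcpus is still 0, so the proposed check can never
select the larger pool, however many vcpus the guest is later given:

    #include <stdio.h>

    /* Simplified stand-in for Xen's struct domain. */
    struct domain { unsigned int max_vcpus; };

    /* Mirrors the sizing logic proposed in the patch above. */
    static unsigned int pick_hap_pages(const struct domain *d)
    {
        return (d->max_vcpus > 128) ? 512 : 256;
    }

    int main(void)
    {
        struct domain d = { .max_vcpus = 0 };    /* state when hap_enable() runs */

        unsigned int pages = pick_hap_pages(&d); /* always 256 at this point */

        d.max_vcpus = 288;                       /* only set by a later hypercall */

        printf("pool sized to %u pages; guest ends up with %u vcpus\n",
               pages, d.max_vcpus);
        return 0;
    }
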
~Andrew
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel