Re: [Xen-devel] [PATCH] arm: allocate top level p2m page for all non-idle VCPUs
BTW this depends on Stefano's "arm: shared_info page allocation and
mapping". I'm happy to hold on to it until then but thought I'd send for
review now...
On Thu, 2012-03-15 at 12:01 +0000, Ian Campbell wrote:
> Not just dom0.
>
> Signed-off-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
> ---
>  xen/arch/arm/domain.c       |    3 +++
>  xen/arch/arm/domain_build.c |    3 ---
>  xen/arch/arm/p2m.c          |    2 +-
>  3 files changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index 5702399..4b38790 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -201,6 +201,9 @@ int arch_domain_create(struct domain *d, unsigned int domcr_flags)
>          clear_page(d->shared_info);
>          share_xen_page_with_guest(
>              virt_to_page(d->shared_info), d, XENSHARE_writable);
> +
> +        if ( (rc = p2m_alloc_table(d)) != 0 )
> +            goto fail;
>      }
>  
>      d->max_vcpus = 8;
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 15632f7..6687e50 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -93,9 +93,6 @@ int construct_dom0(struct domain *d)
>
>      d->max_pages = ~0U;
>  
> -    if ( (rc = p2m_alloc_table(d)) != 0 )
> -        return rc;
> -
>      printk("Populate P2M %#llx->%#llx\n", kinfo.ram_start, kinfo.ram_end);
>      p2m_populate_ram(d, kinfo.ram_start, kinfo.ram_end);
>
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 051a0e8..4f624d8 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -203,7 +203,7 @@ int p2m_alloc_table(struct domain *d)
>      void *p;
>  
>      /* First level P2M is 2 consecutive pages */
> -    page = alloc_domheap_pages(d, 1, 0);
> +    page = alloc_domheap_pages(NULL, 1, 0);
>      if ( page == NULL )
>          return -ENOMEM;
>
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel