Re: [Xen-devel] [PATCH v6 08/29] libxc: introduce a xc_dom_arch for hvm-3.0-x86_32 guests
On Fri, Sep 04, 2015 at 02:08:47PM +0200, Roger Pau Monne wrote:
> This xc_dom_arch will be used in order to build HVM domains. The code is
> based on the existing xc_hvm_populate_memory and xc_hvm_populate_params
> functions.
>
> Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> Acked-by: Wei Liu <wei.liu2@xxxxxxxxxx>
> Cc: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
> Cc: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
> Cc: Ian Campbell <ian.campbell@xxxxxxxxxx>
> Cc: Wei Liu <wei.liu2@xxxxxxxxxx>
> ---
> Changes since v5:
> - Set tr limit to 0x67.
> - Use "goto out" consistently in vcpu_hvm.
> - Unconditionally call free(full_ctx) before exiting vcpu_hvm.
> - Add Wei Liu Ack.
>
> Changes since v4:
> - Replace a malloc+memset with a calloc.
> - Remove a != NULL check.
> - Add Andrew Cooper Reviewed-by.
>
> Changes since v3:
> - Make sure c/s b9dbe33 is not reverted on this patch.
> - Set the initial BSP state using {get/set}hvmcontext.
> ---
> tools/libxc/include/xc_dom.h | 6 +
> tools/libxc/xc_dom_x86.c     | 618 ++++++++++++++++++++++++++++++++++++++++++-
> 2 files changed, 613 insertions(+), 11 deletions(-)
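(For context, since only the meminit hunk is quoted below: the patch wires
these functions up as a new xc_dom_arch. A rough sketch of that registration
follows; the exact field list is an assumption based on the existing
xc_dom_arch hooks in xc_dom_x86.c and this patch's changelog:

    static struct xc_dom_arch xc_hvm_32 = {
        .guest_type = "hvm-3.0-x86_32",
        .page_shift = PAGE_SHIFT_X86,
        .vcpu       = vcpu_hvm,     /* initial BSP state via {get/set}hvmcontext */
        .meminit    = meminit_hvm,  /* quoted below */
        /* ... remaining hooks omitted ... */
    };
)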
> diff --git a/tools/libxc/xc_dom_x86.c b/tools/libxc/xc_dom_x86.c
> index ae8187f..f36b6f6 100644
> --- a/tools/libxc/xc_dom_x86.c
> +++ b/tools/libxc/xc_dom_x86.c
> +static int meminit_hvm(struct xc_dom_image *dom)
> +{
> + unsigned long i, vmemid, nr_pages = dom->total_pages;
> + unsigned long p2m_size;
> + unsigned long target_pages = dom->target_pages;
> + unsigned long cur_pages, cur_pfn;
> + int rc;
> + xen_capabilities_info_t caps;
> + unsigned long stat_normal_pages = 0, stat_2mb_pages = 0,
> + stat_1gb_pages = 0;
> + unsigned int memflags = 0;
> + int claim_enabled = dom->claim_enabled;
> + uint64_t total_pages;
> + xen_vmemrange_t dummy_vmemrange[2];
> + unsigned int dummy_vnode_to_pnode[1];
> + xen_vmemrange_t *vmemranges;
> + unsigned int *vnode_to_pnode;
> + unsigned int nr_vmemranges, nr_vnodes;
> + xc_interface *xch = dom->xch;
> + uint32_t domid = dom->guest_domid;
> +
> + if ( nr_pages > target_pages )
> + memflags |= XENMEMF_populate_on_demand;
> +
> + if ( dom->nr_vmemranges == 0 )
> + {
> + /* Build dummy vnode information
> + *
> + * Guest physical address space layout:
> + * [0, hole_start) [hole_start, 4G) [4G, highmem_end)
> + *
> + * Of course if there is no high memory, the second vmemrange
> + * has no effect on the actual result.
> + */
> +
> + dummy_vmemrange[0].start = 0;
> + dummy_vmemrange[0].end = dom->lowmem_end;
> + dummy_vmemrange[0].flags = 0;
> + dummy_vmemrange[0].nid = 0;
> + nr_vmemranges = 1;
> +
> + if ( dom->highmem_end > (1ULL << 32) )
> + {
> + dummy_vmemrange[1].start = 1ULL << 32;
> + dummy_vmemrange[1].end = dom->highmem_end;
> + dummy_vmemrange[1].flags = 0;
> + dummy_vmemrange[1].nid = 0;
> +
> + nr_vmemranges++;
> + }
> +
> + dummy_vnode_to_pnode[0] = XC_NUMA_NO_NODE;
> + nr_vnodes = 1;
> + vmemranges = dummy_vmemrange;
> + vnode_to_pnode = dummy_vnode_to_pnode;
> + }
> + else
> + {
> + if ( nr_pages > target_pages )
> + {
> + DOMPRINTF("Cannot enable vNUMA and PoD at the same time");
> + goto error_out;
> + }
> +
> + nr_vmemranges = dom->nr_vmemranges;
> + nr_vnodes = dom->nr_vnodes;
> + vmemranges = dom->vmemranges;
> + vnode_to_pnode = dom->vnode_to_pnode;
> + }
> +
> + total_pages = 0;
> + p2m_size = 0;
> + for ( i = 0; i < nr_vmemranges; i++ )
> + {
> + total_pages += ((vmemranges[i].end - vmemranges[i].start)
> + >> PAGE_SHIFT);
> + p2m_size = p2m_size > (vmemranges[i].end >> PAGE_SHIFT) ?
> + p2m_size : (vmemranges[i].end >> PAGE_SHIFT);
> + }
> +
> + if ( total_pages != nr_pages )
> + {
> + DOMPRINTF("vNUMA memory pages mismatch (0x%"PRIx64" != 0x%"PRIx64")",
nr_pages is unsigned long, so it would need to be printed with %lx.
> + total_pages, nr_pages);
> + goto error_out;
> + }
> +
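Something like this ought to do it (a sketch; total_pages is uint64_t, so it
keeps PRIx64):

    DOMPRINTF("vNUMA memory pages mismatch (0x%"PRIx64" != 0x%lx)",
              total_pages, nr_pages);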
--
Anthony PERARD