Re: [Xen-devel] [PATCH 2/3] VMX: allocate VMCS pages from domain heap
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: Tuesday, October 20, 2015 6:36 PM
>
> >>> On 20.10.15 at 12:12, <andrew.cooper3@xxxxxxxxxx> wrote:
> > On 19/10/15 16:22, Jan Beulich wrote:
> >> -static struct vmcs_struct *vmx_alloc_vmcs(void)
> >> +static paddr_t vmx_alloc_vmcs(void)
> >>  {
> >> +    struct page_info *pg;
> >>      struct vmcs_struct *vmcs;
> >>
> >> -    if ( (vmcs = alloc_xenheap_page()) == NULL )
> >> +    if ( (pg = alloc_domheap_page(NULL, 0)) == NULL )
> >
> > As an observation, it would be good to pass v from the caller, and NUMA
> > allocate against v->domain here.
>
> Yes, in another patch.

Which 'another patch'? I suppose not PATCH 3/3, since I didn't see a
related change there.

> >> @@ -580,7 +583,7 @@ int vmx_cpu_up_prepare(unsigned int cpu)
> >>  void vmx_cpu_dead(unsigned int cpu)
> >>  {
> >>      vmx_free_vmcs(per_cpu(vmxon_region, cpu));
> >> -    per_cpu(vmxon_region, cpu) = NULL;
> >> +    per_cpu(vmxon_region, cpu) = 0;
> >
> > While this is currently safe (as pa 0 is not part of the available heap
> > allocation range), might it be worth introducing a named sentinel? I
> > can foresee a DMLite nested Xen scenario where we definitely don't need
> > to treat the low 1MB magically.
>
> I guess there are more things to adjust if we ever cared to recover
> the few hundred kb below 1Mb. And then I don't see why nested
> Xen would matter here: one major reason for reserving that
> space is that we want to put the trampoline there. Do you think
> DMlite would allow us to get away without? But even if so, this
> would again fall under what I've said in the first sentence.

Could you at least introduce a macro first? Regardless of how many
things need adjusting later, that would keep future changes simple.

Thanks
Kevin
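
For reference, a minimal sketch of the NUMA-aware allocation Andrew suggests
above, assuming vmx_alloc_vmcs() were changed to take the vcpu and that
MEMF_node()/domain_to_node() are used for placement; the signature and
surrounding details are illustrative only, not the committed form of the
patch:

    static paddr_t vmx_alloc_vmcs(const struct vcpu *v)
    {
        struct page_info *pg;
        struct vmcs_struct *vmcs;

        /* Place the VMCS page on the NUMA node of the owning domain. */
        pg = alloc_domheap_page(NULL, MEMF_node(domain_to_node(v->domain)));
        if ( pg == NULL )
        {
            gdprintk(XENLOG_WARNING, "Failed to allocate VMCS.\n");
            return 0;
        }

        vmcs = __map_domain_page(pg);
        clear_page(vmcs);
        vmcs->vmcs_revision_id = vmcs_revision_id;
        unmap_domain_page(vmcs);

        return page_to_maddr(pg);
    }

Note that the same helper presumably also backs the per-CPU VMXON region set
up in vmx_cpu_up_prepare(), where no vcpu is available, which may be why the
NUMA change is deferred to a separate patch.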
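
Similarly, a minimal sketch of the kind of named sentinel Kevin asks for; the
macro name and comment are hypothetical and only meant to show the shape of
the change:

    /* No VMCS/VMXON region allocated; pa 0 is never returned by the heap today. */
    #define VMCS_PADDR_INVALID 0UL

    void vmx_cpu_dead(unsigned int cpu)
    {
        vmx_free_vmcs(per_cpu(vmxon_region, cpu));
        per_cpu(vmxon_region, cpu) = VMCS_PADDR_INVALID;
        /* ... remainder of the function unchanged ... */
    }

With that in place, the vmx_alloc_vmcs() error path and any "is this
allocated?" checks can test against the same constant, so a future move away
from pa 0 only touches one definition.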