
Re: [Xen-devel] [PATCH] xen/pvhvm: Support more than 32 VCPUs when migrating (v3).



On Thu, Nov 12, 2015 at 04:40:06PM +0000, Ian Campbell wrote:
> On Fri, 2015-07-10 at 14:57 -0400, Konrad Rzeszutek Wilk wrote:
> > On Fri, Jul 10, 2015 at 02:37:46PM -0400, Konrad Rzeszutek Wilk wrote:
> > > When Xen migrates an HVM guest, by default its shared_info can
> > > only hold up to 32 vCPUs. As such the VCPUOP_register_vcpu_info
> > > hypercall was introduced, which allows us to set up per-page
> > > vcpu_info areas for vCPUs. This means we can boot a PVHVM guest
> > > with more than 32 vCPUs. During migration the per-cpu structure
> > > is allocated freshly by the hypervisor (vcpu_info_mfn is set to
> > > INVALID_MFN) so that the newly migrated guest can make a
> > > VCPUOP_register_vcpu_info hypercall.
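
(For reference, the guest-side registration looks roughly like the sketch
below. It is modelled on Linux's xen_vcpu_setup() rather than being the
literal patch contents; the struct layout comes from xen/interface/vcpu.h
and the per_cpu/arbitrary_virt_to_mfn helpers are the usual Linux ones.)

    /* Simplified sketch of registering a per-vCPU vcpu_info area. */
    struct vcpu_register_vcpu_info info;
    struct vcpu_info *vcpup = &per_cpu(xen_vcpu_info, cpu);
    int err;

    info.mfn = arbitrary_virt_to_mfn(vcpup);   /* page backing the area */
    info.offset = offset_in_page(vcpup);       /* offset within that page */

    err = HYPERVISOR_vcpu_op(VCPUOP_register_vcpu_info, cpu, &info);
    if (err == 0)
            per_cpu(xen_vcpu, cpu) = vcpup;    /* use the per-vCPU area */
    else    /* fall back to the (32-entry) shared_info array */
            per_cpu(xen_vcpu, cpu) = &HYPERVISOR_shared_info->vcpu_info[cpu];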
> > > 
> > > Unfortunately we end up triggering this condition in Xen:
> > > /* Run this command on yourself or on other offline VCPUS. */
> > >  if ( (v != current) && !test_bit(_VPF_down, &v->pause_flags) )
> > > 
> > > which means we are unable to set up the per-cpu VCPU structures
> > > for running vCPUs. The Linux PV code paths make this work by
> > > iterating over every vCPU with the following steps (see the
> > > sketch after this list):
> > > 
> > >  1) is the target vCPU up (VCPUOP_is_up hypercall)?
> > >  2) if yes, then VCPUOP_down to pause it.
> > >  3) VCPUOP_register_vcpu_info
> > >  4) if it was down, then VCPUOP_up to bring it back up
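
(Sketched out, and assuming the usual VCPUOP_* constants from
xen/interface/vcpu.h plus a xen_vcpu_setup(cpu) helper that issues
VCPUOP_register_vcpu_info for the given vCPU, that loop is roughly:)

    /* Rough sketch of the PV restore loop described above; the CPU
     * doing the restore registers itself directly and is not paused. */
    for_each_possible_cpu(cpu) {
            bool other = (cpu != smp_processor_id());
            bool was_up = HYPERVISOR_vcpu_op(VCPUOP_is_up, cpu, NULL) > 0;

            if (other && was_up)
                    HYPERVISOR_vcpu_op(VCPUOP_down, cpu, NULL);  /* pause it */

            xen_vcpu_setup(cpu);    /* VCPUOP_register_vcpu_info */

            if (other && was_up)
                    HYPERVISOR_vcpu_op(VCPUOP_up, cpu, NULL);    /* resume */
    }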
> > > 
> > > But since VCPUOP_down, VCPUOP_is_up, and VCPUOP_up are
> > > not allowed on HVM guests we can't do this. However with the
> > > Xen git commit f80c5623a126afc31e6bb9382268d579f0324a7a
> > > ("xen/x86: allow HVM guests to use hypercalls to bring up vCPUs"")
> > 
> > <sigh> I was in my local tree (Roger's 'hvm_without_dm_v3' branch)
> > looking at patches and spotted this - and thought it was already in!
> > 
> > Sorry about this patch - and please ignore it until the VCPU_op*
> > can be used by HVM guests.
> 
> FYI I just tripped over this while implementing ARM save/restore (in that I
> couldn't figure out HTF HVM VCPUs > MAX_LEGACY_VCPUS were getting their
> vcpu_info re-registered, which turns out to be because they aren't...).
> 
> ARM also lacks the VCPUOP_up/down/is_up hypercalls. My plan there is simply
> to use on_each_cpu to do it; I can get away with this on ARM because the
> necessary infrastructure (IPIs etc.) is provided by the h/w virt platform
> (i.e. it looks native), so there is no reliance on Xen infrastructure being
> fully up.
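
(In code that would look something like the sketch below - hypothetical,
and assuming a xen_vcpu_setup_local() wrapper that registers the calling
CPU's own vcpu_info:)

    /* Each CPU registers its own vcpu_info, so the hypervisor's
     * "run this command on yourself" restriction is satisfied without
     * needing VCPUOP_down/VCPUOP_up at all. */
    static void xen_vcpu_setup_local(void *unused)
    {
            xen_vcpu_setup(smp_processor_id());
    }

    on_each_cpu(xen_vcpu_setup_local, NULL, 1);   /* wait for completion */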
> 
> Not sure if that is also true of x86/PVHVM but thought I would mention it
> in case it seemed preferable to you.

Yes, but on 'HVM' guests we have a hard limit of 32 vCPUs imposed by the
shared_info structure. Hence to go above that we need to use the VCPUOP_*
calls.
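
(The limit comes from the legacy vcpu_info array embedded in shared_info;
paraphrased from the Xen public headers, roughly:)

    /* arch-x86/xen.h */
    #define XEN_LEGACY_MAX_VCPUS 32

    /* xen.h */
    struct shared_info {
            struct vcpu_info vcpu_info[XEN_LEGACY_MAX_VCPUS];
            /* ... */
    };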

> 
> Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

