
Re: [Xen-devel] [PATCH 2/3] xend: Add multiple cpumasks support



* Ian Pratt <m+Ian.Pratt@xxxxxxxxxxxx> [2006-08-14 17:41]:
> > > Either Keir's cpu[X] = "Y" approach or my cpu = [ "A","B","C" ]
> > > approach seem workable.
> > 
> > Your last email seemed to indicate to me that you didn't like using
> > quoted values in a list to separate per-vcpu cpumask values.  Maybe I
> > was mistaken.
> 
> If it's an honest python list I have no problem. Your example appeared
> to be some quoting within a string.

OK.
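
For reference, a minimal sketch of what the honest-Python-list form under
discussion might look like in a domain config file.  The option name and
the rule that entry i pins VCPU i are assumptions for illustration, not
what the final patch settled on:

    # Hypothetical domain config fragment -- assumes entry i of the list
    # gives the cpumask string for VCPU i.
    vcpus = 4
    cpu = [ "0-3", "4-7", "8-11", "12-15" ]

    # Keir's per-vcpu indexing alternative would instead be written as:
    #   cpu[0] = "0-3"
    #   cpu[1] = "4-7"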

> My approach is a list too...
> 
> > > BTW: does the right thing happen in the face of vcpu hot plugging?
> > > i.e. if I unplug a vcpu and put it back in do I keep the old mask?
> > > If I add vcpus what mask do they get?
> > 
> > unplug events only affect a vcpu's status.  The internal struct
> > vcpu in the hypervisor is not de-allocated/re-allocated during hotplug
> > events.
> > 
> > We don't currently support hot-add for vcpus that weren't allocated
> > at domain creation time.  The current method for simulating hot-add
> > would be to start a domain with 32 VCPUS and disable all but the
> > number of vcpus you currently want.  Ryan Grimm posted a patch back
> > in February that had xend do this by adding a new config option,
> > max_vcpus, which was passed to xc_domain_max_vcpus() so that the
> > hypervisor would allocate that maximum number of vcpus; the vcpus
> > parameter then determined how many to bring online.
> 
> I like the idea of having a vcpus_max

I'll see if Ryan Grimm can dust that one off and resend it.
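
For context, here is a rough sketch (not Ryan Grimm's actual patch) of the
max_vcpus flow described above: ask the hypervisor to allocate the maximum
number of vcpus at creation time, then bring only the requested number
online.  The xen.lowlevel.xc method name and the cpu/N/availability
xenstore convention are assumptions here, and write_vcpu_availability is a
hypothetical placeholder for the real xenstore write:

    import xen.lowlevel.xc

    def write_vcpu_availability(domid, vcpu, state):
        # Placeholder: xend would write this under the domain's xenstore
        # tree (cpu/<vcpu>/availability), which the guest kernel watches
        # for hotplug events.
        print("dom%d: cpu/%d/availability = %s" % (domid, vcpu, state))

    def set_initial_vcpus(domid, vcpus, max_vcpus):
        xc_handle = xen.lowlevel.xc.xc()

        # Have the hypervisor allocate vcpu structures for the maximum
        # the domain may ever use ...
        xc_handle.domain_max_vcpus(domid, max_vcpus)

        # ... then mark only the first 'vcpus' of them online; the rest
        # stay offline until a later hot-add flips them on.
        for v in range(max_vcpus):
            if v < vcpus:
                write_vcpu_availability(domid, v, "online")
            else:
                write_vcpu_availability(domid, v, "offline")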

-- 
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
(512) 838-9253   T/L: 678-9253
ryanh@xxxxxxxxxx

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel