
Re: [Xen-devel] [PATCH v8 03/13] x86: maintain COS to CBM mapping for each socket



On Thu, May 28, 2015 at 02:17:54PM +0100, Jan Beulich wrote:
> >>> On 21.05.15 at 10:41, <chao.p.peng@xxxxxxxxxxxxxxx> wrote:
> > For each socket, a COS to CBM mapping structure is maintained. The
> > mapping is indexed by COS and the value is the corresponding CBM.
> > Different VMs may use the same CBM; a reference count is used to
> > indicate whether the CBM is still available.
> > 
> > Signed-off-by: Chao Peng <chao.p.peng@xxxxxxxxxxxxxxx>
> > Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> > ---
> > Changes in v8:
> > * Move the memory allocation and CAT initialization code to CPU_UP_PREPARE.
> > * Add memory freeing code in CPU_DEAD path.
> 
> Changes like this imo invalidate any tags given for earlier versions.

Sure, I will remove it.

> > +static int cat_cpu_init(unsigned int cpu)
> > +{
> > +    int rc;
> > +    const struct cpuinfo_x86 *c = cpu_data + cpu;
> > +
> > +    if ( !cpu_has(c, X86_FEATURE_CAT) )
> > +        return 0;
> > +
> > +    if ( test_bit(cpu_to_socket(cpu), cat_socket_enable) )
> > +        return 0;
> > +
> > +    if ( cpu == smp_processor_id() )
> > +        do_cat_cpu_init(&rc);
> > +    else
> > +        on_selected_cpus(cpumask_of(cpu), do_cat_cpu_init, &rc, 1);
> 
> This now being called in the context of CPU_UP_PREPARE, I can't see
> how this works at all: Neither would the CPU's cpu_data[] instance be
> initialized by that time, nor would you be able to IPI that CPU, nor can I
> see how the if() branch could ever get entered. Was this tested at all?

Ah, yes! It does sound rather difficult to move the memory allocation
from CPU_STARTING to CPU_UP_PREPARE for this case.
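
One direction I could try: keep only the allocation in CPU_UP_PREPARE
and defer everything that needs the new CPU (cpu_to_socket(), feature
checks, MSR setup) to CPU_STARTING, which runs on that CPU itself. A
rough sketch only; cat_staging[], cos_to_cbm_map[], struct cos_cbm_info
and cat_cos_max are illustrative names here, not part of the patch:

    /*
     * CPU_UP_PREPARE: runs on an already-online CPU before the new one
     * boots.  cpu_data[cpu] is not filled in yet, so the socket is
     * unknown; allocate a per-CPU staging buffer unconditionally.
     * Sleeping allocations and error returns are fine at this point.
     */
    static int cat_cpu_prepare(unsigned int cpu)
    {
        if ( !cat_staging[cpu] &&
             (cat_staging[cpu] = xzalloc_array(struct cos_cbm_info,
                                               cat_cos_max + 1)) == NULL )
            return -ENOMEM;

        return 0;
    }

    /*
     * CPU_STARTING: runs on the new CPU itself, so cpu_to_socket() and
     * the CPU's feature flags are usable, but allocating memory no
     * longer is.  Commit the staging buffer if this CPU brings a new
     * socket online; otherwise leave it for the CPU_DEAD path to free.
     */
    static void cat_cpu_starting(unsigned int cpu)
    {
        unsigned int socket = cpu_to_socket(cpu);

        if ( !test_and_set_bit(socket, cat_socket_enable) )
        {
            cos_to_cbm_map[socket] = cat_staging[cpu];
            cat_staging[cpu] = NULL;
            /* ... read CPUID leaf 0x10 and program the COS registers ... */
        }
    }

That way no IPI is needed at all, and the only failure path stays in
CPU_UP_PREPARE, where returning an error is still possible.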

Chao
