
Re: [Xen-devel] [PATCH] arm: use a per-VCPU stack



At 08:44 +0000 on 19 Feb (1329641061), Ian Campbell wrote:
> > Storing the CPU ID in the per-pcpu area only happens to work because
> > per-cpu areas are a noop right now.  I have a patch that re-enables them
> > properly but for that we'll need a proper way of getting the CPU id.
> 
> I had imagined that we would have per pVCPU page tables so the current
> CPU's per-pcpu area would always be at the same location. If that is not
> (going to be) the case then I'll stash it on the VCPU stack instead.

Yes, I'd thought that too, but then when I came to implement it...

> Thinking about it now, playing tricks with the PTs does make it tricky
> on the rare occasions when you want to access another pCPU's per-cpu
> area.

... I saw that, and since I then had to use the normal relocation
tricks anyway, I didn't bother with the local-var special case.  Could
still do it if it turns out to be a perf win (but w/out hardware to
measure on, I think I'll leave the optimizations alone for now).
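
For concreteness, the relocation trick I mean is roughly this (an
untested sketch with made-up names, not the real per-cpu interface):

#define NR_CPUS 8                      /* say */
extern unsigned int smp_processor_id(void);

/* Each pCPU gets its own copy of the per-cpu data block; the offset
 * table records where that copy lives relative to the linked-in
 * template section. */
extern unsigned long __percpu_offset[NR_CPUS];

/* Remote access: relocate the template address by the target pCPU's
 * offset.  Works from any pCPU, at the cost of a table load and an
 * add. */
#define per_cpu(var, cpu) \
    (*(typeof(&(var)))((char *)&(var) + __percpu_offset[cpu]))

/* Local access degenerates to the same thing.  The page-table trick
 * would map the local copy at a fixed virtual address so this became
 * a plain load, but the remote case would still need the offsets. */
#define this_cpu(var) per_cpu(var, smp_processor_id())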

> Speaking of per-cpu areas -- I did notice a strange behaviour while
> debugging this. It seemed that a barrier() was not sufficient to keep
> the compiler from caching the value of "current" in a register (i.e. it
> would load into r6 before the barrier and use r6 after). I figured this
> was probably an unfortunate side effect of the current nobbled per-pcpu
> areas and would be fixed as part of your SMP bringup stuff.

Weird.  Must check that when I rebase the SMP patches.
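
One thing worth checking when you do: barrier() only tells the
compiler that *memory* may have changed.  If, with the nobbled
per-pcpu areas, "current" comes from something the compiler doesn't
treat as a memory access, keeping it in r6 across the barrier is
perfectly legal.  A purely hypothetical illustration (not our actual
definitions):

#define barrier() asm volatile("" ::: "memory")

struct vcpu;

static inline struct vcpu *get_current(void)
{
    struct vcpu *v;
    /* A non-volatile asm with no inputs and no memory operands may
     * be CSE'd: barrier()'s "memory" clobber forces reloads from
     * memory, but this isn't a memory access, so the result can
     * stay cached in a register across the barrier. */
    asm ("mrc p15, 0, %0, c13, c0, 4" : "=r" (v)); /* TPIDRPRW, say */
    return v;
}

Making that asm volatile, or deriving "current" from an actual load,
would cure that particular case.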

> > We could use the physical CPU ID register; I don't know whether it
> > would be faster to stash the ID on the (per-vcpu) stack and update it
> > during context switch.
> 
> Does h/w CPU ID correspond to the s/w one in our circumstances? Might
> they be very sparse or something inconvenient like that?

It does on all the h/w we support :) but yes, it could be sparse,
encoding NUMA topology.
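
(The register I mean is MPIDR.  A sketch of reading it, and of why
"sparse" -- the field layout here is the architected v7 one, not
anything in our code:)

static inline unsigned int read_mpidr(void)
{
    unsigned int mpidr;
    asm ("mrc p15, 0, %0, c0, c0, 5" : "=r" (mpidr));   /* MPIDR */
    return mpidr;
}

/*
 * MPIDR packs affinity levels rather than a dense index:
 *   bits [7:0]    Aff0  (e.g. core within cluster)
 *   bits [15:8]   Aff1  (e.g. cluster)
 *   bits [23:16]  Aff2
 * so a two-cluster, two-cores-each part reports 0x000, 0x001, 0x100,
 * 0x101: fine as identifiers, useless as array indices without a
 * h/w-id -> logical-id map.
 */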

> I'd expect pulling things from registers to be faster in the normal
> case, but in this specific scenario I'd imagine the base of the stack
> will be pretty cache-hot, since it has all the guest state etc. in
> it, which we've probably fairly recently pushed to or are about to
> pop from.

Agreed.
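
i.e. something along these lines (a sketch of the layout I have in
mind, modelled on what x86 does; sizes and names made up):

struct cpu_info {
    /* ... saved guest registers live just below the stack top ... */
    unsigned int processor_id;      /* rewritten at context switch */
};

#define STACK_SIZE (1UL << 13)      /* order-aligned stacks, 8K say */

static inline struct cpu_info *get_cpu_info(void)
{
    unsigned long sp;
    asm ("mov %0, sp" : "=r" (sp));
    /* The info block sits at the very top of the order-aligned
     * stack, so finding it is just arithmetic on sp, and the load
     * hits the lines the entry path has only just written. */
    return (struct cpu_info *)((sp | (STACK_SIZE - 1)) + 1) - 1;
}

#define smp_processor_id() (get_cpu_info()->processor_id)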

Tim.
