[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [Xen-devel] [PATCH v8 08/24] x86: refactor psr: set value: implement framework.



On 17-03-14 04:24:51, Jan Beulich wrote:
> >>> On 14.03.17 at 10:21, <yi.y.sun@xxxxxxxxxxxxxxx> wrote:
> > On 17-03-14 00:29:09, Jan Beulich wrote:
> >> >>> Yi Sun <yi.y.sun@xxxxxxxxxxxxxxx> 03/14/17 3:42 AM >>>
> >> >There are three scenarios. E.g.
> >> >1. User calls the domctl interface on Dom0 to set COS ID 1 for Dom1 in its
> >> >psr_cos_ids[]. Then, Dom1 is scheduled so that 'psr_ctxt_switch_to()' is
> >> >called, which makes COS ID 1 take effect. For this case, we do not need any action.
> >> >
> >> >2. Dom1 runs on CPU 1 and COS ID 1 is working. At the same time, user calls
> >> >the domctl interface on Dom0 to set a new COS ID 2 for Dom1 in psr_cos_ids[].
> >> >After the time slice ends, Dom1 is scheduled again and the new COS ID 2 takes
> >> >effect. For this case, we don't need any action either.
> >> 
> >> And that's because of? I'd think the domctl caller can expect the new
> >> COS ID to take effect immediately for all vCPU-s of the target domain.
> >> 
> >> >3. When a new COS ID is being set in psr_cos_ids[], 'psr_ctxt_switch_to()'
> >> >is called and accesses the same psr_cos_ids[] member through 'psr_assoc_cos'.
> >> >The COS ID is constrained by cos_mask so that it cannot exceed cos_max.
> >> >So even if the COS ID read here is stale, it is still a workable ID (within
> >> >cos_max). The functionality still works, but of course the COS ID may
> >> >not be the one the user intends to use.
> >> >
> >> >If you think scenario 3 is not acceptable, I suggest adding a read-write
> >> >lock as below. What do you think? Thanks!
> >> >
> >> >static void psr_assoc_cos()
> >> >{
> >> >    read_lock(&rwlock);
> >> >    *reg = (*reg & ~cos_mask) |
> >> >           (((uint64_t)cos << PSR_ASSOC_REG_SHIFT) & cos_mask);
> >> >    read_unlock(&rwlock);
> >> >}
> >> >
> >> >int psr_set_val()
> >> >{
> >> >    ......
> >> >    write_lock(&rwlock);
> >> >    d->arch.psr_cos_ids[socket] = cos;
> >> >    write_unlock(&rwlock);
> >> >    ......
> >> >}
> >> 
> >> I don't see how that would help. The domain could then still use a stale
> >> COS ID. I see only two valid approaches: Either you pause the domain
> >> during the update, or you IPI CPUs running vCPU-s of that domain to
> >> reload their MSRs. The latter could become tricky afaict ...
> >> 
> > For IPI solution, could you please help to check if below codes can work?
> 
> Well, I did say "tricky", didn't I? What you suggest still leaves a
> time window, as the result of cpupool_domain_cpumask() may
> change behind your back. The simplest (and least tricky) solution
> would be to IPI all CPUs, and check whether any adjustment is
> needed inside the handler. However, you'd then need to make
> sure there's no context switch related race, similar to the VMCS
> one I've referred you to as example.
> 
Sorry, I may not fully understand your meaning. My thoughts are below.
1. We set 'd->arch.psr_cos_ids[socket] = cos;' in 'psr_set_val';

2. After that, we get the valid cpumask through cpupool_domain_cpumask();

3. If the actual valid cpumask changes after that, a newly added CPU only
   becomes valid through a context switch onto it. 'psr_ctxt_switch_to' is
   then called, which updates the new CPU's ASSOC register with the new COS
   ID already set in step 1.

4. Send an IPI to all CPUs in the cpumask obtained in step 2. They will
   update their ASSOC registers according to their domain's psr_cos_ids[].

So I think this flow covers all CPUs whose ASSOC registers need to be updated.

What is your idea? Thanks!

BRs,
Sun Yi

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
