
Re: [Xen-devel] [PATCH v10 09/11] x86/ctxt: Issue a speculation barrier between vcpu contexts



>>> On 26.01.18 at 12:13, <dfaggioli@xxxxxxxx> wrote:
> On Fri, 2018-01-26 at 02:43 -0700, Jan Beulich wrote:
>> > > > On 26.01.18 at 02:08, <dfaggioli@xxxxxxxx> wrote:
>> > And in order to go and investigate this a bit further, Jan, what is
>> > it
>> > that you were doing when you saw what you described above? AFAIUI,
>> > that's booting an HVM guest, isn't it?
>> 
>> Yes, plus then run some arbitrary work inside it.
>> 
> Ok. And you've seen the "spurious" migrations only/mostly during boot,
> or even afterwards, while running this work?

The ratio of spurious moves was higher during guest boot, but
their total count kept growing when the guest was under some sort
of load later on - it was just that there were more "explainable"
moves then.

>> If you want, I could
>> bring the code I've used for monitoring into patch form and hand
>> it to you.
>> 
> If it's not a problem, and whenever you have time, yes, that would be
> useful, I think.

--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1691,6 +1691,7 @@ static void __context_switch(void)
 }
 
 
+static DEFINE_PER_CPU(const void *, last_sync);//temp
 void context_switch(struct vcpu *prev, struct vcpu *next)
 {
     unsigned int cpu = smp_processor_id();
@@ -1725,9 +1726,12 @@ void context_switch(struct vcpu *prev, s
          (is_idle_domain(nextd) && cpu_online(cpu)) )
     {
         local_irq_enable();
+per_cpu(last_sync, cpu) = NULL;//temp
     }
     else
     {
+static DEFINE_PER_CPU(const struct vcpu*, last_vcpu);//temp
+const struct vcpu*last_vcpu = per_cpu(curr_vcpu, cpu);//temp
         __context_switch();
 
         if ( is_pv_domain(nextd) &&
@@ -1750,6 +1754,31 @@ void context_switch(struct vcpu *prev, s
         }
 
         ctxt_switch_levelling(next);
+if(is_hvm_domain(nextd)) {//temp
+ static DEFINE_PER_CPU(unsigned long, good);
+ if(next != per_cpu(last_vcpu, cpu))
+  ++per_cpu(good, cpu);
+ else {
+  static DEFINE_PER_CPU(unsigned long, bad);
+  static DEFINE_PER_CPU(unsigned long, cnt);
+  static DEFINE_PER_CPU(unsigned long, thr);
+  static DEFINE_PER_CPU(domid_t, last_dom);
+  domid_t curr_dom = last_vcpu->domain->domain_id;
+  ++per_cpu(bad, cpu);
+  if(curr_dom < DOMID_FIRST_RESERVED && curr_dom > per_cpu(last_dom, cpu)) {
+   per_cpu(thr, cpu) = 0;
+   per_cpu(cnt, cpu) = 0;
+   per_cpu(last_dom, cpu) = curr_dom;
+  }
+  if(++per_cpu(cnt, cpu) > per_cpu(thr, cpu)) {
+   per_cpu(thr, cpu) |= per_cpu(cnt, cpu);
+   printk("%pv -> %pv -> %pv -> %pv %lu:%lu [%pS]\n",
+          per_cpu(last_vcpu, cpu), last_vcpu, prev, next,
+          per_cpu(good, cpu), per_cpu(bad, cpu), per_cpu(last_sync, cpu));
+  }
+ }
+}
+per_cpu(last_vcpu, cpu) = next;//temp
     }
 
     context_saved(prev);
@@ -1794,6 +1823,7 @@ int __sync_local_execstate(void)
     if ( switch_required )
     {
         ASSERT(current == idle_vcpu[smp_processor_id()]);
+this_cpu(last_sync) = __builtin_return_address(0);//temp
         __context_switch();
     }
 
Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel