
[Xen-devel] [PATCH v1 02/13] Set VCPU's is_running flag closer to when the VCPU is dispatched



An interrupt handler that fires while a new VCPU is being scheduled may want
to know who was on the (physical) processor at the point of the interrupt.
Just looking at 'current' may not be accurate, since there is a window of
time when 'current' already points to the new VCPU and its is_running flag
is set, but the VCPU has not been dispatched yet. More importantly, on Intel
processors, if the handler wants to examine certain state of an HVM VCPU
(such as segment registers), the VMCS pointer is not set yet.

This patch moves the setting of the is_running flag closer to the point
where the VCPU is actually dispatched.
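
For illustration, a consumer of the new behavior might look like the sketch
below. This is a minimal sketch only: profile_interrupt() is a hypothetical
handler, while current, is_hvm_vcpu() and hvm_get_segment_register() are
existing Xen interfaces.

static void profile_interrupt(void)
{
    struct vcpu *v = current;
    struct segment_register cs;

    /*
     * Without this patch there is a window where v->is_running is
     * already set but v has not been dispatched, so on VMX the VMCS
     * is not loaded yet and reading segment state would be unsafe.
     */
    if ( is_hvm_vcpu(v) && v->is_running )
        hvm_get_segment_register(v, x86_seg_cs, &cs);
}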

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
---
 xen/arch/x86/domain.c             |  1 +
 xen/arch/x86/hvm/svm/entry.S      |  2 ++
 xen/arch/x86/hvm/vmx/entry.S      |  1 +
 xen/arch/x86/x86_64/asm-offsets.c |  1 +
 xen/common/schedule.c             | 10 ++++++++--
 5 files changed, 13 insertions(+), 2 deletions(-)

I am not particularly happy about the changes to common/schedule.c. I could
define an arch-specific macro in an include file, but I don't see a good
place to do this. Perhaps someone can suggest a better solution.

Or maybe the ifdef is not needed at all (it was added in case something breaks
on ARM).
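
For reference, the macro approach could look something like the sketch below
(hypothetical only; arch_dispatch_sets_is_running() is an invented name, not
something this patch introduces):

/* Generic default, e.g. somewhere in xen/include/xen/sched.h: */
#ifndef arch_dispatch_sets_is_running
#define arch_dispatch_sets_is_running(v) 0
#endif

/* x86 override: non-idle VCPUs set the flag at dispatch instead. */
#define arch_dispatch_sets_is_running(v) (!is_idle_vcpu(v))

/* common/schedule.c could then stay free of the ifdef: */
    if ( !arch_dispatch_sets_is_running(next) )
    {
        ASSERT(!next->is_running);
        next->is_running = 1;
    }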

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 874742c..e119d7b 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -142,6 +142,7 @@ static void continue_nonidle_domain(struct vcpu *v)
 {
     check_wakeup_from_wait();
     mark_regs_dirty(guest_cpu_user_regs());
+    v->is_running = 1;
     reset_stack_and_jump(ret_from_intr);
 }
 
diff --git a/xen/arch/x86/hvm/svm/entry.S b/xen/arch/x86/hvm/svm/entry.S
index 1969629..728e773 100644
--- a/xen/arch/x86/hvm/svm/entry.S
+++ b/xen/arch/x86/hvm/svm/entry.S
@@ -74,6 +74,8 @@ UNLIKELY_END(svm_trace)
 
         mov  VCPU_svm_vmcb_pa(%rbx),%rax
 
+        movb $1,VCPU_is_running(%rbx)
+
         pop  %r15
         pop  %r14
         pop  %r13
diff --git a/xen/arch/x86/hvm/vmx/entry.S b/xen/arch/x86/hvm/vmx/entry.S
index 496a62c..9e33f45 100644
--- a/xen/arch/x86/hvm/vmx/entry.S
+++ b/xen/arch/x86/hvm/vmx/entry.S
@@ -125,6 +125,7 @@ UNLIKELY_END(realmode)
         mov  $GUEST_RFLAGS,%eax
         VMWRITE(UREGS_eflags)
 
+        movb $1,VCPU_is_running(%rbx)
         cmpb $0,VCPU_vmx_launched(%rbx)
         pop  %r15
         pop  %r14
diff --git a/xen/arch/x86/x86_64/asm-offsets.c b/xen/arch/x86/x86_64/asm-offsets.c
index b0098b3..9fa06c0 100644
--- a/xen/arch/x86/x86_64/asm-offsets.c
+++ b/xen/arch/x86/x86_64/asm-offsets.c
@@ -86,6 +86,7 @@ void __dummy__(void)
     OFFSET(VCPU_kernel_sp, struct vcpu, arch.pv_vcpu.kernel_sp);
     OFFSET(VCPU_kernel_ss, struct vcpu, arch.pv_vcpu.kernel_ss);
     OFFSET(VCPU_guest_context_flags, struct vcpu, arch.vgc_flags);
+    OFFSET(VCPU_is_running, struct vcpu, is_running);
     OFFSET(VCPU_nmi_pending, struct vcpu, nmi_pending);
     OFFSET(VCPU_mce_pending, struct vcpu, mce_pending);
     OFFSET(VCPU_nmi_old_mask, struct vcpu, nmi_state.old_mask);
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index a8398bd..af3edbc 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -1219,8 +1219,14 @@ static void schedule(void)
      * switch, else lost_records resume will not work properly.
      */
 
-    ASSERT(!next->is_running);
-    next->is_running = 1;
+#ifdef CONFIG_X86
+    if ( is_idle_vcpu(next) )
+    /* On x86, guests will set is_running right before they start running. */
+#endif
+    {
+        ASSERT(!next->is_running);
+        next->is_running = 1;
+    }
 
     pcpu_schedule_unlock_irq(cpu);
 
-- 
1.8.1.4

