
[Xen-changelog] [xen staging] xen/sched: we never get into context_switch() with prev==next



commit 508bc75d9582897d0541aad64b6cdf9fa2ecdf89
Author:     Dario Faggioli <dfaggioli@xxxxxxxx>
AuthorDate: Sat Apr 20 17:24:47 2019 +0200
Commit:     Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
CommitDate: Mon May 13 10:35:37 2019 +0100

    xen/sched: we never get into context_switch() with prev==next
    
    In schedule(), if we pick, as the next vcpu to run (next), the same one
    that is already running (prev), we never get to call context_switch().

    We can, therefore, get rid of all the `if`-s testing whether prev and
    next differ, replacing them with an ASSERT() (on ARM, the ASSERT() was
    already there!).
    
    Suggested-by: Juergen Gross <jgross@xxxxxxxx>
    Signed-off-by: Dario Faggioli <dfaggioli@xxxxxxxx>
    Acked-by: Julien Grall <julien.grall@xxxxxxx>
    Acked-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Reviewed-by: Andrii Anisov <andrii_anisov@xxxxxxxx>
---
 xen/arch/arm/domain.c |  3 +--
 xen/arch/x86/domain.c | 22 ++++++++--------------
 2 files changed, 9 insertions(+), 16 deletions(-)
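
Editor's note: the guards removed below are redundant because schedule()
already short-circuits before ever reaching context_switch() when it
re-selects the currently running vcpu. A simplified sketch of that path
follows; identifiers such as continue_running() and
pcpu_schedule_unlock_irq() are modelled on xen/common/schedule.c of this
period, but the snippet is illustrative rather than a verbatim copy:

    /*
     * Tail of schedule(): if the scheduler picks the vcpu that is already
     * running, it drops the scheduler lock and resumes that vcpu directly,
     * so context_switch() is never entered with prev == next.
     */
    if ( unlikely(prev == next) )
    {
        pcpu_schedule_unlock_irq(lock, cpu);
        return continue_running(prev);   /* no context switch at all */
    }
    ...
    context_switch(prev, next);          /* hence prev != next in there */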

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 6dc633ed50..915ae0b4c6 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -343,8 +343,7 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
     ASSERT(prev != next);
     ASSERT(!vcpu_cpu_dirty(next));
 
-    if ( prev != next )
-        update_runstate_area(prev);
+    update_runstate_area(prev);
 
     local_irq_disable();
 
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 9eaa978ce5..d2d9f2fc3c 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1721,6 +1721,7 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
     const struct domain *prevd = prev->domain, *nextd = next->domain;
     unsigned int dirty_cpu = next->dirty_cpu;
 
+    ASSERT(prev != next);
     ASSERT(local_irq_is_enabled());
 
     get_cpu_info()->use_pv_cr3 = false;
@@ -1732,12 +1733,9 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
         flush_mask(cpumask_of(dirty_cpu), FLUSH_VCPU_STATE);
     }
 
-    if ( prev != next )
-    {
-        _update_runstate_area(prev);
-        vpmu_switch_from(prev);
-        np2m_schedule(NP2M_SCHEDLE_OUT);
-    }
+    _update_runstate_area(prev);
+    vpmu_switch_from(prev);
+    np2m_schedule(NP2M_SCHEDLE_OUT);
 
     if ( is_hvm_domain(prevd) && !list_empty(&prev->arch.hvm.tm_list) )
         pt_save_timer(prev);
@@ -1794,14 +1792,10 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
 
     context_saved(prev);
 
-    if ( prev != next )
-    {
-        _update_runstate_area(next);
-
-        /* Must be done with interrupts enabled */
-        vpmu_switch_to(next);
-        np2m_schedule(NP2M_SCHEDLE_IN);
-    }
+    _update_runstate_area(next);
+    /* Must be done with interrupts enabled */
+    vpmu_switch_to(next);
+    np2m_schedule(NP2M_SCHEDLE_IN);
 
     /* Ensure that the vcpu has an up-to-date time base. */
     update_vcpu_system_time(next);
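
Editor's note: Xen's ASSERT() only evaluates its condition in debug builds,
so in production builds the removed checks do not merely get simpler, they
disappear. Roughly, as defined in xen/include/xen/lib.h (simplified sketch;
the exact wording of the macro may differ between releases):

    #ifndef NDEBUG
    #define ASSERT(p) \
        do { if ( unlikely(!(p)) ) assert_failed(#p); } while (0)
    #else
    /* Non-debug build: the condition is type-checked but never evaluated. */
    #define ASSERT(p) do { if ( 0 && (p) ) {} } while (0)
    #endif

Unlike the removed `if`-s, the new checks therefore cost nothing in release
builds while still catching a violated invariant when built with debug=y.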
--
generated by git-patchbot for /home/xen/git/xen.git#staging
