
[Xen-changelog] [xen staging] xen/vm-event: Fix interactions with the vcpu list



commit 928f59868c9a440c85e0f158dc75a4daffe4dceb
Author:     Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
AuthorDate: Fri May 31 12:29:27 2019 -0700
Commit:     Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
CommitDate: Tue Jun 4 14:43:51 2019 +0100

    xen/vm-event: Fix interactions with the vcpu list
    
    vm_event_resume() should use domain_vcpu(), rather than open-coding it
    without its Spectre v1 safety.
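    
    For context, domain_vcpu() both bounds-checks the vcpu id and clamps
    the array index, so that a mispredicted bounds check cannot be used to
    read d->vcpu[] out of range under speculation.  A rough sketch of the
    shape of the helper (the in-tree version lives in
    xen/include/xen/sched.h and clamps via array_index_nospec(); the exact
    body may differ):
    
        static inline struct vcpu *domain_vcpu(const struct domain *d,
                                               unsigned int vcpu_id)
        {
            /*
             * Clamp the array index so a mispredicted bounds check cannot
             * speculatively read past the end of d->vcpu[].
             */
            unsigned int idx = array_index_nospec(vcpu_id, d->max_vcpus);

            return vcpu_id >= d->max_vcpus ? NULL : d->vcpu[idx];
        }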
    
    vm_event_wake_blocked() can't ever be invoked in a case where d->vcpu is
    NULL, so drop the outer if() and reindent, fixing up style issues.
    
    The comment, which is left alone, is false.  This algorithm still has
    starvation issues when there is an asymmetric rate of generated events.
    
    However, the existing logic is sufficiently complicated and fragile that
    I don't think I've followed it fully, and because we're trying to
    obsolete this interface, the safest course of action is to leave it
    alone, rather than to end up making things subtly different.
    
    Therefore, there is no practical change that callers would notice.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Acked-by: Razvan Cojocaru <rcojocaru@xxxxxxxxxxxxxxx>
---
 xen/common/vm_event.c | 38 ++++++++++++++++----------------------
 1 file changed, 16 insertions(+), 22 deletions(-)

diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
index dcba98cef7..72f42b408a 100644
--- a/xen/common/vm_event.c
+++ b/xen/common/vm_event.c
@@ -119,34 +119,29 @@ static unsigned int vm_event_ring_available(struct vm_event_domain *ved)
 static void vm_event_wake_blocked(struct domain *d, struct vm_event_domain *ved)
 {
     struct vcpu *v;
-    unsigned int avail_req = vm_event_ring_available(ved);
+    unsigned int i, j, k, avail_req = vm_event_ring_available(ved);
 
     if ( avail_req == 0 || ved->blocked == 0 )
         return;
 
     /* We remember which vcpu last woke up to avoid scanning always linearly
      * from zero and starving higher-numbered vcpus under high load */
-    if ( d->vcpu )
+    for ( i = ved->last_vcpu_wake_up + 1, j = 0; j < d->max_vcpus; i++, j++ )
     {
-        int i, j, k;
-
-        for (i = ved->last_vcpu_wake_up + 1, j = 0; j < d->max_vcpus; i++, j++)
-        {
-            k = i % d->max_vcpus;
-            v = d->vcpu[k];
-            if ( !v )
-                continue;
+        k = i % d->max_vcpus;
+        v = d->vcpu[k];
+        if ( !v )
+            continue;
 
-            if ( !(ved->blocked) || avail_req == 0 )
-               break;
+        if ( !ved->blocked || avail_req == 0 )
+            break;
 
-            if ( test_and_clear_bit(ved->pause_flag, &v->pause_flags) )
-            {
-                vcpu_unpause(v);
-                avail_req--;
-                ved->blocked--;
-                ved->last_vcpu_wake_up = k;
-            }
+        if ( test_and_clear_bit(ved->pause_flag, &v->pause_flags) )
+        {
+            vcpu_unpause(v);
+            avail_req--;
+            ved->blocked--;
+            ved->last_vcpu_wake_up = k;
         }
     }
 }
@@ -382,11 +377,10 @@ static int vm_event_resume(struct domain *d, struct vm_event_domain *ved)
         }
 
         /* Validate the vcpu_id in the response. */
-        if ( (rsp.vcpu_id >= d->max_vcpus) || !d->vcpu[rsp.vcpu_id] )
+        v = domain_vcpu(d, rsp.vcpu_id);
+        if ( !v )
             continue;
 
-        v = d->vcpu[rsp.vcpu_id];
-
         /*
          * In some cases the response type needs extra handling, so here
          * we call the appropriate handlers.
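
The Spectre v1 point, in brief: with the open-coded check, a mispredicted
"rsp.vcpu_id >= d->max_vcpus" branch can still index d->vcpu[] out of bounds
under speculation, whereas domain_vcpu() masks the index used for the array
access rather than trusting the branch alone.  A simplified illustration of
that masking idea follows; it is not the Xen implementation (see
xen/include/xen/nospec.h for the real array_index_nospec()), and it assumes
index and size are both below LONG_MAX:

    #include <limits.h>

    /*
     * All-ones when index < size, zero otherwise.  Relies on arithmetic
     * right shift of negative values, which is implementation-defined in
     * ISO C but universal on the platforms Xen targets.
     */
    static unsigned long index_mask(unsigned long index, unsigned long size)
    {
        return (unsigned long)((long)(index - size) >>
                               (sizeof(long) * CHAR_BIT - 1));
    }

    /* Clamp the index to zero when out of range, so that even a
     * mis-speculated array access stays inside the bounds. */
    static unsigned long clamp_index(unsigned long index, unsigned long size)
    {
        return index & index_mask(index, size);
    }

The architectural bounds check (returning NULL for an invalid vcpu_id) is
still performed; the mask only hardens the speculative path.
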
--
generated by git-patchbot for /home/xen/git/xen.git#staging

_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/xen-changelog

 

