
[Xen-devel] [PATCH 3/3] xen/vcpu: Rework sanity checks in vcpu_create()



Poisoning idle_vcpu[0] with the sanity debug value isn't actually a clever
idea, because the poisoned pointer passes a NULL check despite not being a
usable vcpu.  It is also the reason for the (!is_idle_domain(d) || vcpu_id)
part of the existing sanity BUG_ON().
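
As a minimal sketch of the hazard (the consumer below is invented for
illustration, not code from the tree):

    /* Sketch only: any NULL-based "is this vcpu set up?" test is fooled. */
    struct vcpu *v = idle_vcpu[0];   /* poison, e.g. (struct vcpu *)0xfffff000 */

    if ( v != NULL )                 /* passes: the poison value is non-NULL */
        vcpu_wake(v);                /* dereferences a bogus pointer */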

Now that d->max_vcpus is appropriately set up before vcpu_create() is called,
we can properly range check the requested vcpu_id.  Drop the BUG_ON() and
replace it with code which is runtime safe but non-fatal.
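
Callers can then treat a bad vcpu_id like any other failure.  A sketch of the
intended pattern (assuming a (d, vcpu_id) style signature; the error value is
invented for illustration):

    struct vcpu *v = vcpu_create(d, vcpu_id);

    if ( v == NULL )        /* allocation failure, or bogus input */
        return -ENOMEM;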

While v0 must be the first allocated vcpu for for_each_vcpu() to work, it
isn't a requirement for threading the vcpu into the linked list, so update
the threading code to be more generic, and add a comment explaining why we
need to search for prev_id.
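
For reference, for_each_vcpu() begins its walk at d->vcpu[0] and follows the
next_in_list chain; its shape is approximately:

    #define for_each_vcpu(_d, _v)                       \
        for ( (_v) = (_d)->vcpu ? (_d)->vcpu[0] : NULL; \
              (_v) != NULL;                             \
              (_v) = (_v)->next_in_list )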

Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
---
CC: Jan Beulich <JBeulich@xxxxxxxx>
CC: Wei Liu <wei.liu2@xxxxxxxxxx>
CC: Roger Pau Monné <roger.pau@xxxxxxxxxx>
CC: Stefano Stabellini <sstabellini@xxxxxxxxxx>
CC: Julien Grall <julien.grall@xxxxxxx>
---
 xen/arch/arm/setup.c |  1 -
 xen/arch/x86/setup.c |  1 -
 xen/common/domain.c  | 32 ++++++++++++++++++++++++++++----
 3 files changed, 28 insertions(+), 6 deletions(-)
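
A worked example of the sparse IDLE case handled below, with invented CPU
numbers: if only CPUs 0, 2 and 5 are online, the idle domain has vcpus in
slots 0, 2 and 5 of d->vcpu[].  Creating v5 starts the prev_id search at 4,
skips the NULL slots, and splices after v2:

    d->vcpu[]: [ v0 ][NULL][ v2 ][NULL][NULL][ v5 ]
    prev_id:              2  <-- search <--  4
    list:      v0 -> v2 -> v5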

diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 01aaaab..d06ac40 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -704,7 +704,6 @@ void __init start_xen(unsigned long boot_phys_offset,
     set_processor_id(0); /* needed early, for smp_processor_id() */
 
     set_current((struct vcpu *)0xfffff000); /* debug sanity */
-    idle_vcpu[0] = current;
 
     setup_virtual_regions(NULL, NULL);
     /* Initialize traps early allow us to get backtrace when an error occurred */
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index a2f22a1..5e1e8ae 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -691,7 +691,6 @@ void __init noreturn __start_xen(unsigned long mbi_p)
 
     set_processor_id(0);
     set_current(INVALID_VCPU); /* debug sanity. */
-    idle_vcpu[0] = current;
     init_shadow_spec_ctrl_state();
 
     percpu_init_areas();
diff --git a/xen/common/domain.c b/xen/common/domain.c
index a9df589..d23b54a 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -138,7 +138,19 @@ struct vcpu *vcpu_create(
 {
     struct vcpu *v;
 
-    BUG_ON((!is_idle_domain(d) || vcpu_id) && d->vcpu[vcpu_id]);
+    /*
+     * Sanity check some input expectations:
+     *  - d->max_vcpus and d->vcpu[] should be set up
+     *  - vcpu_id should be bounded by d->max_vcpus
+     *  - v0 must be the first-allocated vcpu
+     *  - No previous vcpu with this id should be allocated
+     */
+    if ( !d->max_vcpus || !d->vcpu || vcpu_id >= d->max_vcpus ||
+         (vcpu_id > 0 && !d->vcpu[0]) || d->vcpu[vcpu_id] )
+    {
+        ASSERT_UNREACHABLE();
+        return NULL;
+    }
 
     if ( (v = alloc_vcpu_struct()) == NULL )
         return NULL;
@@ -178,15 +190,27 @@ struct vcpu *vcpu_create(
     if ( arch_vcpu_create(v) != 0 )
         goto fail_sched;
 
+    /* Insert the vcpu into the domain's vcpu list. */
     d->vcpu[vcpu_id] = v;
     if ( vcpu_id != 0 )
     {
         int prev_id = v->vcpu_id - 1;
+
+        /*
+         * Look for the previously allocated vcpu, and splice it into the
+         * next_in_list singly-linked list.
+         *
+         * All domains other than IDLE have tightly packed vcpu_ids.  IDLE
+         * vcpu_ids are derived from hardware CPU ids and can be sparse.
+         */
         while ( (prev_id >= 0) && (d->vcpu[prev_id] == NULL) )
             prev_id--;
-        BUG_ON(prev_id < 0);
-        v->next_in_list = d->vcpu[prev_id]->next_in_list;
-        d->vcpu[prev_id]->next_in_list = v;
+
+        if ( prev_id >= 0 )
+        {
+            v->next_in_list = d->vcpu[prev_id]->next_in_list;
+            d->vcpu[prev_id]->next_in_list = v;
+        }
     }
 
     /* Must be called after making new vcpu visible to for_each_vcpu(). */
-- 
2.1.4

