[Xen-devel] [PATCH v2] xen/vcpu: Rework sanity checks in vcpu_create()
Poisoning idle_vcpu[0] with the sanity debug value isn't actually a clever
idea, because it passes a NULL pointer check but isn't a usable vcpu.  It is
also the reason for the (!is_idle_domain(d) || vcpu_id) part of the existing
sanity BUG_ON().

Now that d->max_vcpus is appropriately set up before vcpu_create() is called,
we can properly range check the requested vcpu_id.

Drop the BUG_ON() and replace it with code which is runtime safe but
non-fatal.

Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
---
CC: Jan Beulich <JBeulich@xxxxxxxx>
CC: Wei Liu <wei.liu2@xxxxxxxxxx>
CC: Roger Pau Monné <roger.pau@xxxxxxxxxx>
CC: Stefano Stabellini <sstabellini@xxxxxxxxxx>
CC: Julien Grall <julien.grall@xxxxxxx>
CC: Jason Andryuk <jandryuk@xxxxxxxxx>

v2:
 * Fix the tboot check following the un-poisoning of idle_vcpu[0]
 * Exclude the idle domain from the next_in_list list, and vastly simplify
   the linking logic.
---
 xen/arch/arm/setup.c |  1 -
 xen/arch/x86/setup.c |  1 -
 xen/arch/x86/tboot.c |  2 +-
 xen/common/domain.c  | 28 ++++++++++++++++++----------
 4 files changed, 19 insertions(+), 13 deletions(-)

diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index ea2495a..a80032c 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -704,7 +704,6 @@ void __init start_xen(unsigned long boot_phys_offset,
 
     set_processor_id(0); /* needed early, for smp_processor_id() */
     set_current((struct vcpu *)0xfffff000); /* debug sanity */
-    idle_vcpu[0] = current;
 
     setup_virtual_regions(NULL, NULL);
     /* Initialize traps early allow us to get backtrace when an error occurred */
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 2fbf7d5..2081592 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -691,7 +691,6 @@ void __init noreturn __start_xen(unsigned long mbi_p)
 
     set_processor_id(0);
     set_current(INVALID_VCPU); /* debug sanity. */
-    idle_vcpu[0] = current;
 
     init_shadow_spec_ctrl_state();
     percpu_init_areas();
diff --git a/xen/arch/x86/tboot.c b/xen/arch/x86/tboot.c
index f3fdee4..fff00bf 100644
--- a/xen/arch/x86/tboot.c
+++ b/xen/arch/x86/tboot.c
@@ -395,7 +395,7 @@ void tboot_shutdown(uint32_t shutdown_type)
      * During early boot, we can be called by panic before idle_vcpu[0] is
      * setup, but in that case we don't need to change page tables.
      */
-    if ( idle_vcpu[0] != INVALID_VCPU )
+    if ( idle_vcpu[0] )
         write_ptbase(idle_vcpu[0]);
 
     ((void(*)(void))(unsigned long)g_tboot_shared->shutdown_entry)();
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 66054b3..a0d950c 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -138,7 +138,21 @@ struct vcpu *vcpu_create(
 {
     struct vcpu *v;
 
-    BUG_ON((!is_idle_domain(d) || vcpu_id) && d->vcpu[vcpu_id]);
+    /*
+     * Sanity check some input expectations:
+     * - d->max_vcpus and d->vcpu[] should be set up
+     * - vcpu_id should be bounded by d->max_vcpus
+     * - Vcpus should be tightly packed and allocated in ascending order
+     *   (except for the idle domain).
+     * - No previous vcpu with this id should be allocated
+     */
+    if ( !d->max_vcpus || !d->vcpu || vcpu_id >= d->max_vcpus ||
+         (!is_idle_domain(d) && vcpu_id && !d->vcpu[vcpu_id - 1]) ||
+         d->vcpu[vcpu_id] )
+    {
+        ASSERT_UNREACHABLE();
+        return NULL;
+    }
 
     if ( (v = alloc_vcpu_struct()) == NULL )
         return NULL;
@@ -178,16 +192,10 @@ struct vcpu *vcpu_create(
     if ( arch_vcpu_create(v) != 0 )
         goto fail_sched;
 
+    /* Insert the vcpu into the domain's vcpu list. */
     d->vcpu[vcpu_id] = v;
-    if ( vcpu_id != 0 )
-    {
-        int prev_id = v->vcpu_id - 1;
-        while ( (prev_id >= 0) && (d->vcpu[prev_id] == NULL) )
-            prev_id--;
-        BUG_ON(prev_id < 0);
-        v->next_in_list = d->vcpu[prev_id]->next_in_list;
-        d->vcpu[prev_id]->next_in_list = v;
-    }
+    if ( !is_idle_domain(d) && vcpu_id > 0 )
+        d->vcpu[vcpu_id - 1]->next_in_list = v;
 
     /* Must be called after making new vcpu visible to for_each_vcpu(). */
     vcpu_check_shutdown(v);
-- 
2.1.4
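For illustration only (not part of the patch): a minimal, self-contained
sketch of the shape of the reworked sanity check.  The struct layouts and the
helper name vcpu_id_is_sane() below are invented for the example and are not
Xen's real definitions; the real check lives inline in vcpu_create() as shown
in the domain.c hunk above.

#include <stdbool.h>
#include <stdio.h>

struct vcpu { unsigned int vcpu_id; };

struct domain {
    unsigned int max_vcpus;   /* 0 until the vcpu array has been set up */
    struct vcpu **vcpu;       /* array of max_vcpus slots, or NULL */
    bool is_idle;
};

/*
 * Mirrors the conditions the reworked vcpu_create() rejects: an unset
 * d->max_vcpus/d->vcpu[], an out-of-range id, a gap in the allocation
 * order (for non-idle domains), or an already-populated slot.
 */
static bool vcpu_id_is_sane(const struct domain *d, unsigned int vcpu_id)
{
    if ( !d->max_vcpus || !d->vcpu || vcpu_id >= d->max_vcpus )
        return false;
    if ( !d->is_idle && vcpu_id && !d->vcpu[vcpu_id - 1] )
        return false;             /* vcpus must be packed and ascending */
    if ( d->vcpu[vcpu_id] )
        return false;             /* slot already populated */
    return true;
}

int main(void)
{
    struct vcpu *slots[4] = { NULL };
    struct domain d = { .max_vcpus = 4, .vcpu = slots, .is_idle = false };

    printf("id 0: %d\n", vcpu_id_is_sane(&d, 0)); /* 1 - first vcpu is fine */
    printf("id 2: %d\n", vcpu_id_is_sane(&d, 2)); /* 0 - would leave a gap */
    printf("id 4: %d\n", vcpu_id_is_sane(&d, 4)); /* 0 - out of range */

    return 0;
}

In the patch itself, a failed check hits ASSERT_UNREACHABLE() (loud in debug
builds) and then returns NULL, so release builds fail the creation path
gracefully instead of crashing the host, which is what "runtime safe but
non-fatal" refers to.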