[Xen-devel] [PATCH] xen/sched: Introduce domain_vcpu() helper
The progression of multi-vcpu support in Xen (originally a single pointer,
then an embedded d->vcpu[] array, then a dynamically allocated array) has
resulted in a large quantity of ad-hoc code for looking a vcpu up by id, and
a large number of ways that the toolstack can cause Xen to trip over a NULL
pointer.  Some of this has been addressed in Xen 4.12, and work is ongoing.

Another issue is that looking a vcpu up by id is frequently done in
unprivileged hypercall context, making it an attractive target for
speculative sidechannel attacks.

Introduce a helper to do the lookup correctly, and without speculative
interference.

For performance reasons, it is useful not to have an smp_rmb() in this
helper on ARM, and luckily this is safe to do, because of the serialisation
offered by the global domheap lock.

As a minor change noticed when checking the safety of this construct, sanity
check during boot that idle->max_vcpus (i.e. nr_cpu_ids) is a suitable upper
bound for the static idle_vcpu[] array.

Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
---
CC: Jan Beulich <JBeulich@xxxxxxxx>
CC: Wei Liu <wei.liu2@xxxxxxxxxx>
CC: Roger Pau Monné <roger.pau@xxxxxxxxxx>
CC: Stefano Stabellini <sstabellini@xxxxxxxxxx>
CC: Julien Grall <julien.grall@xxxxxxx>
CC: Juergen Gross <jgross@xxxxxxxx>
CC: Norbert Manthey <nmanthey@xxxxxxxxx>

Some notes:

* This patch really is from 2014, and formed my first attempt to fix the
  NULL dereference problem, which eventually started going upstream as the
  domain creation changes.  It is posted now to help with the XSA-289 series.

* The reasoning about the safety of not having an smp_rmb() is only true for
  Xen 4.12.  It does not hold for older versions of Xen.
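* As an illustrative sketch only (the function name, parameters and error
  code below are hypothetical and not part of this patch), an open-coded
  lookup in a hypercall handler typically looks like:

      static long hypothetical_vcpu_op(struct domain *d, unsigned int vcpuid)
      {
          struct vcpu *v;

          /* Open-coded bounds check; vcpuid is a speculation target. */
          if ( vcpuid >= d->max_vcpus || (v = d->vcpu[vcpuid]) == NULL )
              return -ENOENT;

          /* ... operate on v ... */
          return 0;
      }

  and would reduce to:

      static long hypothetical_vcpu_op(struct domain *d, unsigned int vcpuid)
      {
          struct vcpu *v = domain_vcpu(d, vcpuid);

          if ( v == NULL )
              return -ENOENT;

          /* ... operate on v ... */
          return 0;
      }

  with the bounds check and the array_index_nospec() clamp folded into the
  helper, so callers can't forget the clamp or get the check wrong.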
---
 xen/common/schedule.c   |  1 +
 xen/include/xen/sched.h | 26 ++++++++++++++++++++++++++
 2 files changed, 27 insertions(+)

diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index a957c5e..fd58762 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -1815,6 +1815,7 @@ void __init scheduler_init(void)
 
     idle_domain = domain_create(DOMID_IDLE, NULL, false);
     BUG_ON(IS_ERR(idle_domain));
+    BUG_ON(nr_cpu_ids > ARRAY_SIZE(idle_vcpu));
     idle_domain->vcpu = idle_vcpu;
     idle_domain->max_vcpus = nr_cpu_ids;
     if ( vcpu_create(idle_domain, 0, 0) == NULL )
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 4956a77..a761db0 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -15,6 +15,7 @@
 #include <xen/nodemask.h>
 #include <xen/radix-tree.h>
 #include <xen/multicall.h>
+#include <xen/nospec.h>
 #include <xen/tasklet.h>
 #include <xen/mm.h>
 #include <xen/smp.h>
@@ -836,6 +837,31 @@ static inline int domain_pause_by_systemcontroller_nosync(struct domain *d)
 void domain_pause_except_self(struct domain *d);
 void domain_unpause_except_self(struct domain *d);
 
+/*
+ * For each allocated vcpu, d->vcpu[X]->vcpu_id == X
+ *
+ * During construction, all vcpus in d->vcpu[] are allocated sequentially,
+ * and in ascending order.  Therefore, if d->vcpu[N] exists (e.g. derived
+ * from current), all vcpus with an id less than N also exist.
+ *
+ * SMP considerations: The idle domain is constructed before APs are
+ * started.  All other domains have d->vcpu[] allocated and d->max_vcpus set
+ * before the domain is made visible in the domlist, which is serialised on
+ * the global domlist_update_lock.
+ *
+ * Therefore, all observations of d->max_vcpus vs d->vcpu[] will be
+ * consistent despite the lack of smp_* barriers, either by being on the
+ * same CPU as the one which issued the writes, or because of barrier
+ * properties of the domain having been inserted into the domlist.
+ */
+static inline struct vcpu *domain_vcpu(const struct domain *d,
+                                       unsigned int vcpu_id)
+{
+    unsigned int idx = array_index_nospec(vcpu_id, d->max_vcpus);
+
+    return vcpu_id >= d->max_vcpus ? NULL : d->vcpu[idx];
+}
+
 void cpu_init(void);
 
 struct scheduler;
-- 
2.1.4

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel