[Xen-changelog] Fix VCPU locking in sched_adjdom for multi-VCPU guests
# HG changeset patch
# User ack@xxxxxxxxxxxxxxxxxxxxxxx
# Node ID 85b79ab1e56df1cde42aa51071499b9c7e70163c
# Parent  efd7c2f3b496dcc4519c9492e2ac13f07fec92ee
Fix VCPU locking in sched_adjdom for multi-VCPU guests

diff -r efd7c2f3b496 -r 85b79ab1e56d xen/common/schedule.c
--- a/xen/common/schedule.c	Tue Jan 31 13:29:26 2006
+++ b/xen/common/schedule.c	Tue Jan 31 15:24:16 2006
@@ -305,7 +305,7 @@
 long sched_adjdom(struct sched_adjdom_cmd *cmd)
 {
     struct domain *d;
-    struct vcpu *v;
+    struct vcpu *v, *vme;
 
     if ( (cmd->sched_id != ops.sched_id) ||
          ((cmd->direction != SCHED_INFO_PUT) &&
@@ -319,24 +319,37 @@
     /*
      * Most VCPUs we can simply pause. If we are adjusting this VCPU then
      * we acquire the local schedule_lock to guard against concurrent updates.
+     *
+     * We only acquire the local schedule lock after we have paused all other
+     * VCPUs in this domain. There are two reasons for this:
+     * 1- We don't want to hold up interrupts as pausing a VCPU can
+     *    trigger a tlb shootdown.
+     * 2- Pausing other VCPUs involves briefly locking the schedule
+     *    lock of the CPU they are running on. This CPU could be the
+     *    same as ours.
      */
+    vme = NULL;
+
     for_each_vcpu ( d, v )
     {
         if ( v == current )
-            vcpu_schedule_lock_irq(v);
+            vme = current;
         else
             vcpu_pause(v);
     }
 
+    if (vme)
+        vcpu_schedule_lock_irq(vme);
+
     SCHED_OP(adjdom, d, cmd);
-    TRACE_1D(TRC_SCHED_ADJDOM, d->domain_id);
 
+    if (vme)
+        vcpu_schedule_unlock_irq(vme);
+
     for_each_vcpu ( d, v )
     {
-        if ( v == current )
-            vcpu_schedule_unlock_irq(v);
-        else
+        if ( v != vme )
             vcpu_unpause(v);
     }

_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-changelog