[Xen-changelog] [xen-unstable] x86/hvm/pmtimer: improving scalability of virtual time update
# HG changeset patch
# User Keir Fraser <keir@xxxxxxx>
# Date 1290014897 0
# Node ID fcb5b09babc0b101df4d6138d3c59dde244f8aa1
# Parent  c1b7aae86cf51c7a80925d050a3b7b3ef08e7cc2
x86/hvm/pmtimer: improving scalability of virtual time update

Mitigate the heavy contention on handle_pmt_io when running an HVM
guest configured with many cores (e.g., 32 cores). The virtual time of
a domain must be kept fresh, so some VCPU has to update it
periodically; however, there is no need for a VCPU to perform the
update while another VCPU is already doing so, and in that case the
update can safely be skipped.

With this patch, every time a VCPU invokes handle_pmt_io to update the
current domain's virtual time, it first tries to acquire the pmtimer
lock. If it succeeds, it performs the update itself. Otherwise it
skips the update, waits for the pmtimer lock holder to finish updating
the virtual time, and returns the freshly updated value.

Signed-off-by: Xiang Song <xiangsong@xxxxxxxxxxxx>
Signed-off-by: Keir Fraser <keir@xxxxxxx>
---
 xen/arch/x86/hvm/pmtimer.c |   34 +++++++++++++++++++++++++---------
 1 files changed, 25 insertions(+), 9 deletions(-)

diff -r c1b7aae86cf5 -r fcb5b09babc0 xen/arch/x86/hvm/pmtimer.c
--- a/xen/arch/x86/hvm/pmtimer.c	Wed Nov 17 16:42:37 2010 +0000
+++ b/xen/arch/x86/hvm/pmtimer.c	Wed Nov 17 17:28:17 2010 +0000
@@ -88,7 +88,7 @@ static void pmt_update_time(PMTState *s)
 static void pmt_update_time(PMTState *s)
 {
     uint64_t curr_gtime, tmp;
-    uint32_t msb = s->pm.tmr_val & TMR_VAL_MSB;
+    uint32_t tmr_val = s->pm.tmr_val, msb = tmr_val & TMR_VAL_MSB;
 
     ASSERT(spin_is_locked(&s->lock));
 
@@ -96,12 +96,15 @@ static void pmt_update_time(PMTState *s)
     curr_gtime = hvm_get_guest_time(s->vcpu);
     tmp = ((curr_gtime - s->last_gtime) * s->scale) + s->not_accounted;
     s->not_accounted = (uint32_t)tmp;
-    s->pm.tmr_val += tmp >> 32;
-    s->pm.tmr_val &= TMR_VAL_MASK;
+    tmr_val += tmp >> 32;
+    tmr_val &= TMR_VAL_MASK;
     s->last_gtime = curr_gtime;
-    
+
+    /* Update timer value atomically wrt lock-free reads in handle_pmt_io(). */
+    *(volatile uint32_t *)&s->pm.tmr_val = tmr_val;
+
     /* If the counter's MSB has changed, set the status bit */
-    if ( (s->pm.tmr_val & TMR_VAL_MSB) != msb )
+    if ( (tmr_val & TMR_VAL_MSB) != msb )
     {
         s->pm.pm1a_sts |= TMR_STS;
         pmt_update_sci(s);
@@ -215,10 +218,23 @@ static int handle_pmt_io(
 
     if ( dir == IOREQ_READ )
     {
-        spin_lock(&s->lock);
-        pmt_update_time(s);
-        *val = s->pm.tmr_val;
-        spin_unlock(&s->lock);
+        if ( spin_trylock(&s->lock) )
+        {
+            /* We hold the lock: update timer value and return it. */
+            pmt_update_time(s);
+            *val = s->pm.tmr_val;
+            spin_unlock(&s->lock);
+        }
+        else
+        {
+            /*
+             * Someone else is updating the timer: rather than do the work
+             * again ourselves, wait for them to finish and then steal their
+             * updated value with a lock-free atomic read.
+             */
+            spin_barrier(&s->lock);
+            *val = *(volatile uint32_t *)&s->pm.tmr_val;
+        }
         return X86EMUL_OKAY;
    }
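
For readers outside the Xen tree, the trylock-or-wait pattern above can be
illustrated with a small standalone sketch built on C11 atomics. Everything
in it is an illustrative assumption rather than Xen code: the names
(pmt_like, read_timer, pmt_barrier, 'fresh') are made up, Xen's
spin_trylock()/spin_barrier() are stood in for by a plain atomic lock word,
and the value that pmt_update_time() would derive from guest time is passed
in as a parameter.

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-in for PMTState: a lock word plus the published value. */
struct pmt_like {
    _Atomic int lock;          /* 0 = free, 1 = held (plays the pmtimer lock) */
    _Atomic uint32_t tmr_val;  /* value published by whoever holds the lock   */
};

/* Take the lock only if it is free, without spinning; like spin_trylock(). */
static int pmt_trylock(struct pmt_like *s)
{
    return atomic_exchange_explicit(&s->lock, 1, memory_order_acquire) == 0;
}

static void pmt_unlock(struct pmt_like *s)
{
    atomic_store_explicit(&s->lock, 0, memory_order_release);
}

/* Wait for the current holder to drop the lock without taking it ourselves;
 * this mimics what spin_barrier() is used for in the patch. */
static void pmt_barrier(struct pmt_like *s)
{
    while ( atomic_load_explicit(&s->lock, memory_order_acquire) )
        ; /* spin */
}

/* Trylock-or-wait read, following the shape of the patched handle_pmt_io().
 * 'fresh' stands for the value pmt_update_time() would compute from guest
 * time; computing it is outside the scope of this sketch. */
static uint32_t read_timer(struct pmt_like *s, uint32_t fresh)
{
    uint32_t val;

    if ( pmt_trylock(s) )
    {
        /* We won the race: publish the new value and return it. */
        atomic_store_explicit(&s->tmr_val, fresh, memory_order_release);
        val = fresh;
        pmt_unlock(s);
    }
    else
    {
        /* Someone else is already updating: wait for them to finish and
         * read the value they published, without touching the lock. */
        pmt_barrier(s);
        val = atomic_load_explicit(&s->tmr_val, memory_order_acquire);
    }
    return val;
}

int main(void)
{
    struct pmt_like s = { .lock = 0, .tmr_val = 0 };
    printf("%u\n", read_timer(&s, 1234));  /* uncontended path: prints 1234 */
    return 0;
}

Note that Xen's spin_barrier() waits for the lock to become free without
ever acquiring it, so waiters do not queue behind one another; the sketch's
pmt_barrier() mimics that with a plain load loop. At most one reader pays
for the update at a time while the rest pay only a short wait plus one
atomic read, which is roughly why this scales better than funnelling every
VCPU through spin_lock().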