[PATCH] ioreq: cope with server disappearing while I/O is pending
From: Paul Durrant <pdurrant@xxxxxxxxxx>

Currently, in the event of an ioreq server being destroyed while I/O
is pending in the attached emulator, it is possible that
hvm_wait_for_io() will dereference a pointer to a 'struct
hvm_ioreq_vcpu' or the ioreq server's shared page after it has been
freed. This will only occur if the emulator (which is necessarily
running in a service domain with some degree of privilege) does not
complete pending I/O during tear-down and is not directly exploitable
by a guest domain.

This patch adds a call to get_pending_vcpu() into the condition of the
wait_on_xen_event_channel() macro to verify the continued existence of
the ioreq server. Should it disappear, the guest domain will be
crashed.

NOTE: take the opportunity to modify the text on one gdprintk() for
      consistency with others.

Reported-by: Julien Grall <julien@xxxxxxx>
Signed-off-by: Paul Durrant <pdurrant@xxxxxxxxxx>
---
Cc: Jan Beulich <jbeulich@xxxxxxxx>
Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Cc: "Roger Pau Monné" <roger.pau@xxxxxxxxxx>
Cc: Wei Liu <wl@xxxxxxx>
---
 xen/arch/x86/hvm/ioreq.c | 30 ++++++++++++++++++++++--------
 1 file changed, 22 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 1cc27df87f..e8b97cd30c 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -115,6 +115,7 @@ bool hvm_io_pending(struct vcpu *v)
 
 static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p)
 {
+    struct vcpu *v = sv->vcpu;
     unsigned int prev_state = STATE_IOREQ_NONE;
     unsigned int state = p->state;
     uint64_t data = ~0;
@@ -132,7 +133,7 @@ static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p)
             gdprintk(XENLOG_ERR, "Weird HVM ioreq state transition %u -> %u\n",
                      prev_state, state);
             sv->pending = false;
-            domain_crash(sv->vcpu->domain);
+            domain_crash(v->domain);
             return false; /* bail */
         }
 
@@ -145,23 +146,36 @@ static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p)
 
         case STATE_IOREQ_READY:  /* IOREQ_{READY,INPROCESS} -> IORESP_READY */
         case STATE_IOREQ_INPROCESS:
-            wait_on_xen_event_channel(sv->ioreq_evtchn,
-                                      ({ state = p->state;
-                                         smp_rmb();
-                                         state != prev_state; }));
+            /*
+             * NOTE: The ioreq server may have been destroyed whilst the
+             *       vcpu was blocked so re-acquire the pointer to
+             *       hvm_ioreq_vcpu to check this condition.
+             */
+            wait_on_xen_event_channel(
+                sv->ioreq_evtchn,
+                ({ sv = get_pending_vcpu(v, NULL);
+                   state = sv ? p->state : STATE_IOREQ_NONE;
+                   smp_rmb();
+                   state != prev_state; }));
+            if ( !sv )
+            {
+                gdprintk(XENLOG_ERR, "HVM ioreq server has disappeared\n");
+                domain_crash(v->domain);
+                return false; /* bail */
+            }
             continue;
 
         default:
-            gdprintk(XENLOG_ERR, "Weird HVM iorequest state %u\n", state);
+            gdprintk(XENLOG_ERR, "Weird HVM ioreq state %u\n", state);
             sv->pending = false;
-            domain_crash(sv->vcpu->domain);
+            domain_crash(v->domain);
             return false; /* bail */
         }
 
         break;
     }
 
-    p = &sv->vcpu->arch.hvm.hvm_io.io_req;
+    p = &v->arch.hvm.hvm_io.io_req;
     if ( hvm_ioreq_needs_completion(p) )
         p->data = data;
 
-- 
2.20.1
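
The core of the fix generalises beyond Xen: a waiter must not dereference a
pointer it cached before blocking if the object's owner may have torn the
object down in the meantime, so the pointer is re-looked-up inside the wait
condition (here via get_pending_vcpu()) and "object gone" is treated as a
terminal state. Below is a minimal, self-contained C sketch of that pattern
using pthreads; it is only an illustration, and every name in it (struct
server, lookup_server(), wait_for_response(), teardown()) is hypothetical
rather than part of the Xen code touched by this patch.

/*
 * Illustrative sketch only: re-validate the object inside the wait loop
 * instead of trusting a pointer cached before going to sleep.
 * Build with: cc -pthread sketch.c
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct server {
    int state;                      /* 0 = request pending, 1 = response ready */
};

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static struct server *registered;   /* NULL once the server is destroyed */

/* Loosely analogous to get_pending_vcpu(): look the object up afresh. */
static struct server *lookup_server(void)
{
    return registered;               /* caller must hold 'lock' */
}

/* Loosely analogous to hvm_wait_for_io(): re-acquire the pointer after
 * every wake-up and bail out if the server has disappeared. */
static bool wait_for_response(void)
{
    bool ok = false;

    pthread_mutex_lock(&lock);
    for ( ;; )
    {
        struct server *s = lookup_server();

        if ( !s )                    /* torn down while we were blocked */
        {
            fprintf(stderr, "server has disappeared\n");
            break;                   /* the hypervisor would crash the domain here */
        }
        if ( s->state == 1 )         /* response is ready */
        {
            ok = true;
            break;
        }
        pthread_cond_wait(&cond, &lock);
    }
    pthread_mutex_unlock(&lock);

    return ok;
}

/* Simulate an emulator being destroyed without completing the I/O. */
static void *teardown(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    registered = NULL;
    pthread_cond_broadcast(&cond);
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    struct server s = { .state = 0 };
    pthread_t t;

    registered = &s;
    pthread_create(&t, NULL, teardown, NULL);
    printf("wait_for_response() -> %d\n", wait_for_response());
    pthread_join(&t, NULL);
    return 0;
}

As in the patch, the re-lookup happens under the same synchronisation that
orders the waiter against destruction, so there is no window between checking
that the object still exists and dereferencing it.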