[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [Xen-devel] [PATCH] x86/upcall: inject a spurious event after setting upcall vector



On Tue, Jan 02, 2018 at 09:47:40AM -0700, Jan Beulich wrote:
> >>> On 28.12.17 at 13:57, <roger.pau@xxxxxxxxxx> wrote:
> > In case the vCPU has pending events to inject. This fixes a bug that
> > happened if the guest mapped the vcpu info area using
> > VCPUOP_register_vcpu_info without having setup the event channel
> > upcall, and then setup the upcall vector.
> > 
> > In this scenario the guest would not receive any upcalls, because the
> > call to VCPUOP_register_vcpu_info would have marked the vCPU as having
> > pending events, but the vector could not be injected because it was
> > not yet setup.
> > 
> > This has not caused issues so far because all the consumers first
> > setup the vector callback and then map the vcpu info page, but there's
> > no limitation that prevents doing it in the inverse order.
> 
> Hmm, yes, okay, I can see that we may indeed want to do this for
> symmetry reasons. There is a small theoretical risk of this causing
> races, though, for not entirely well written guest drivers.

Not exactly. In the scenario described above, failing to inject this
event also causes all further events to be lost. This is because
VCPUOP_register_vcpu_info sets evtchn_upcall_pending and tries to
inject an event using arch_evtchn_inject. If the vector is not set at
that point, arch_evtchn_inject does nothing, but
evtchn_upcall_pending is left set.

Further calls to vcpu_mark_events_pending will not call into
hvm_assert_evtchn_irq because the pending bit is already set, thus
preventing the delivery of any event channel interrupts.

> > --- a/xen/arch/x86/hvm/hvm.c
> > +++ b/xen/arch/x86/hvm/hvm.c
> > @@ -4069,6 +4069,7 @@ static int hvmop_set_evtchn_upcall_vector(
> >      printk(XENLOG_G_INFO "%pv: upcall vector %02x\n", v, op.vector);
> >  
> >      v->arch.hvm_vcpu.evtchn_upcall_vector = op.vector;
> > +    arch_evtchn_inject(v);
> 
> Why go through the arch hook instead of calling
> hvm_assert_evtchn_irq() directly?
> 
> > --- a/xen/arch/x86/hvm/irq.c
> > +++ b/xen/arch/x86/hvm/irq.c
> > @@ -385,6 +385,7 @@ void hvm_set_callback_via(struct domain *d, uint64_t via)
> >      struct hvm_irq *hvm_irq = hvm_domain_irq(d);
> >      unsigned int gsi=0, pdev=0, pintx=0;
> >      uint8_t via_type;
> > +    struct vcpu *v;
> >  
> >      via_type = (uint8_t)MASK_EXTR(via, HVM_PARAM_CALLBACK_IRQ_TYPE_MASK) + 1;
> >      if ( ((via_type == HVMIRQ_callback_gsi) && (via == 0)) ||
> > @@ -447,6 +448,9 @@ void hvm_set_callback_via(struct domain *d, uint64_t via)
> >  
> >      spin_unlock(&d->arch.hvm_domain.irq_lock);
> >  
> > +    for_each_vcpu(d, v)
> > +        arch_evtchn_inject(v);
> 
> Wouldn't it make sense to limit this to actually active vCPU-s?

Since it's not a hot path I didn't bother, but I guess using
is_vcpu_online should be fine?

Will switch to hvm_assert_evtchn_irq for both of the above, since this
is x86/hvm code already.

Thanks, Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

