
Re: [Xen-devel] [PATCH v9 07/10] xen: remove workaround to inject evtchn_irq on irq enable



On Mon, 4 Aug 2014, Jan Beulich wrote:
> >>> On 04.08.14 at 12:02, <stefano.stabellini@xxxxxxxxxxxxx> wrote:
> > On Mon, 4 Aug 2014, Jan Beulich wrote:
> >> >>> On 01.08.14 at 19:00, <stefano.stabellini@xxxxxxxxxxxxx> wrote:
> >> > On Fri, 25 Jul 2014, Jan Beulich wrote:
> >> >> but I still don't see why ARM needs what x86 (even for HVM) appears to
> >> >> get along fine without.
> >> > 
> >> > Good question.
> >> > x86 PV guests have in xen_irq_enable:
> >> > 
> >> > if (unlikely(vcpu->evtchn_upcall_pending))
> >> >         xen_force_evtchn_callback();
> >> > 
> >> > Also xen_irq_enable_direct calls check_events.
> >> > 
> >> > I suspect that PV on HVM guests that get events via gsi interrupts get
> >> > away without it because they only call VCPUOP_register_vcpu_info on
> >> > secondary cpus.
> >> 
> >> But my pointer was specifically to pure HVM guests (among all the
> >> other possible kinds)...
> > 
> > Pure HVM guests don't map the vcpu info struct so they don't have this
> > problem. After all they don't have event channels.
> > More advanced forms of PV on HVM guests have vector callbacks that need
> > no emulation at the vioapic/vlapic level.
> > So that leaves us with old-style PV on HVM guests, which receive event
> > notifications via legacy interrupts, but only to the first vcpu. This is
> > the case I am talking about.
> > 
> > Am I missing anything?
> 
> No, you're right. So coming back to your suspicion above: Nothing
> prevents an HVM guest from also calling VCPUOP_register_vcpu_info on
> the boot CPU (and in fact such an asymmetry would seem pretty
> odd); old-style HVM guests with PV drivers (built from
> unmodified_drivers/) don't call VCPUOP_register_vcpu_info at all.
> But in the end, if what you say is true, there would be a case where
> x86 is also broken; it's just that there doesn't appear to be a kernel
> utilizing this case. Since especially for HVM guests we shouldn't be
> making assumptions in the hypervisor on guest behavior, shouldn't
> your patch at least try to address that case then at once?
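
For reference, the check quoted above lives in Linux's xen_irq_enable()
(arch/x86/xen/irq.c); a condensed sketch of its shape, not the verbatim
code:

/* Re-enabling "interrupts" in a PV guest means clearing the upcall mask
 * and then re-checking for events that arrived while it was set. */
static void xen_irq_enable_sketch(void)
{
    struct vcpu_info *vcpu = this_cpu_read(xen_vcpu);

    vcpu->evtchn_upcall_mask = 0;
    barrier();  /* unmask first, then check, to avoid losing an event */

    if (unlikely(vcpu->evtchn_upcall_pending))
        xen_force_evtchn_callback();
}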

The most logical thing to do would be to implement arch_evtchn_inject on
x86 as:

void arch_evtchn_inject(struct vcpu *v)
{
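    /* Only HVM(-container) vcpus have a virtual interrupt controller
     * through which the event-channel interrupt can be asserted; for PV
     * vcpus this is a no-op. */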
    if ( has_hvm_container_vcpu(v) )
        hvm_assert_evtchn_irq(v);
}
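
For comparison, the ARM side of this series injects the per-domain
event-channel interrupt through the virtual GIC; roughly (a sketch, the
exact helper and field names may differ):

void arch_evtchn_inject(struct vcpu *v)
{
    /* Deliver the domain's event-channel PPI to this vcpu via the vGIC. */
    vgic_vcpu_inject_irq(v, v->domain->arch.evtchn_irq);
}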

However, the x86 version is very difficult to test because:
- the !xen_have_vector_callback code path doesn't work properly on a
modern Linux kernel;
- going all the way back to 2.6.37, !xen_have_vector_callback works, but
then calling xen_vcpu_setup on vcpu0 doesn't work anyway. I don't know
exactly why, but I don't think the reason has anything to do with the
problem we are discussing here. In fact, simply calling on vcpu0 a
hypercall that only sets evtchn_upcall_pending and then calls
arch_evtchn_inject works as expected; a sketch of such a hypercall is
below.
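
A minimal sketch of such a debug hypercall handler (the name and the way
it would be wired into the hypercall table are made up purely for
illustration):

/* Illustrative only: mark an event as pending on the given vcpu and ask
 * the arch code to deliver the notification, mirroring the test described
 * above. */
static long debug_force_evtchn_upcall(struct vcpu *v)
{
    vcpu_info(v, evtchn_upcall_pending) = 1;
    arch_evtchn_inject(v);
    return 0;
}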

I know we are not just dealing with Linux guests, but given all this I
am not sure how useful it would actually be to provide the
implementation of arch_evtchn_inject on x86.  What do you think?

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

