RE: [Xen-devel] [PATCH] [RESEND] remove redundent call to hvm_do_resume
>> Remove redundant call to hvm_do_resume.
>> In this patch, IO event channels in qemu are used for IO-done
>> notification only, so we can simplify the event-checking handling in
>> hvm_do_resume.
>> Besides, the current interrupt notification from qemu is completely
>> meaningless, because either it comes along with an IO event, or it
>> actually does nothing since Xen has already consumed the event
>> channel; we will find a better way to implement that functionality.
>
> I like avoiding hvm_do_resume() on every vm entry, so I took that part
> of the patch. The change to ioemu was odd -- there are other places
> that set send_event to 1 (the code that adds IRQs to the PIC IRR). We
> know that code is somewhat broken for SMP guests, but you *do* need to
> notify Xen when interrupts happen, right?

Actually, for the current code it's not necessary to notify Xen when
interrupts happen in the qemu dm: when an interrupt happens in the qemu
dm, we set the PIC state in the IO shared page, and then, just before
the next vmentry, vmx_intr_assist will automatically distribute and
inject the interrupt.

There are 3 cases in which we call hvm_do_resume:
1) An IO-done event notification, possibly accompanied by an interrupt
   event notification.
2) Being scheduled in after having been scheduled out for using up the
   time slice.
3) An interrupt event notification. If this vcpu is in an IO
   transaction, it gets blocked again, and this case eventually turns
   into case 1.

So the only case we need to care about is when the current vcpu is not
in an IO transaction and an interrupt event happens. In my mind we
should call vcpu_kick on the target vcpu, but the current code has no
idea of this. That is to say, notifying Xen is useless for the current
code. But even if we kick the target vcpu, we may be wrong, because it
may not be the real target vcpu the interrupt should be delivered to,
as you've already noticed.
Ideally, we should take the IOAPIC and PIC code out of the vcpu
execution context, as in the patch you've worked out :-)

-Xin

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel