Re: [Xen-devel] [V9 PATCH 7/8] pvh dom0: check for vioapic null ptr in vioapic_range
On Tue, 06 May 2014 08:44:25 +0100 "Jan Beulich" <JBeulich@xxxxxxxx> wrote:

> >>> On 06.05.14 at 02:19, <mukesh.rathor@xxxxxxxxxx> wrote:
> > 1. We can go with the patch that adds pvh_mmio_handlers in
> >    anticipation of future msix, hpet support:
> >
> >        +static const struct hvm_mmio_handler *const
> >        +pvh_mmio_handlers[HVM_MMIO_HANDLER_NR] =
> >        ...
> >
> >    That would naturally return X86EMUL_UNHANDLEABLE in this case.
> >
> >    OUTCOME: guest would be injected with GPF, but xen won't crash it.
>
> That's pretty odd a result from a nested paging fault, but perhaps
> acceptable/reasonable for PVH. Of course such should never happen
> for HVM. And I wonder whether a virtualization exception (vector 20)
> wouldn't be more natural here (delivered directly via hardware if
> support for this is available and there isn't anything preventing the
> delivery).
>
> > 2. We could go with the vioapic null ptr check in vioapic_range()
> >    itself, thus causing handle_mmio to return X86EMUL_UNHANDLEABLE
> >    also.
> >
> >    OUTCOME: guest would be injected with GPF, but xen won't crash it.
> >
> > 3. Add the pvh check above in hvm_hap_nested_page_fault:
> >
> >        put_gfn(p2m->domain, gfn);
> >        if ( is_pvh_vcpu(v) )
> >        {
> >            rc = 0;
> >            goto out;
> >        }
> >        if ( !handle_mmio() )                     <==========
> >            hvm_inject_hw_exception(TRAP_gp_fault, 0);
> >        ...
> >
> >    OUTCOME: xen would crash the guest with the nice ept violation
> >    message.
> >
> > 4. Add a check for pvh in handle_mmio:
> >
> >        int handle_mmio(void)
> >        {
> >            int rc;
> >
> >            if ( is_pvh_vcpu(current) )
> >                return X86EMUL_UNHANDLEABLE;
> >        ...
> >
> >    OUTCOME: guest would be injected with GPF, but xen won't crash it.
> >
> > 5. Do 1/2/4 above, but in addition, in hvm_hap_nested_page_fault() do:
> >
> >        if ( (p2mt == p2m_mmio_dm) ||
> >             (access_w && (p2mt == p2m_ram_ro)) )
> >        {
> >            put_gfn(p2m->domain, gfn);
> >            rc = 1;
> >            if ( !handle_mmio() )                 <==========
> >            {
> >                if ( is_pvh_vcpu(v) )
> >                    rc = 0;
> >                else
> >                    hvm_inject_hw_exception(TRAP_gp_fault, 0);
> >            }
> >            goto out;
> >        }
> >
> >    OUTCOME: xen would crash the guest with the nice ept violation
> >    message that prints all the details.
> >
> > Please let me know your thoughts and preference, and I'll submit a
> > patch. The patch would not be part of the dom0 pvh series, as a pvh
> > domU ept violation could cause this xen panic in vioapic_range too.
> > We may want to backport it as well (not sure, given the experimental
> > nature of the feature).
>
> So my first preference would be #VE to be delivered to the guest. If
> such can't be delivered, I guess killing the guest is more natural
> than injecting #GP - we may want/need to make support for #VE a
> requirement for PVH guests.

#VE support doesn't appear to be there in the guest right now. So, let
me not get too distracted; we've got other higher priority fixmes, not
to mention other support like AMD, migration, 32-bit, etc. that I'd
like to address right after this series goes in.

> Alternatively, whether handle_mmio() itself filters out PVH, or
> whether its callers take care not to call it in that case, largely
> depends on what would produce cleaner code - i.e. if all but one of
> the code paths already avoid calling the function, I would think the
> remaining one should do so too, unless moving the filter into the
> function would allow cleaning up the other call sites.
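To make #1 above a bit more concrete: the table would just reuse the
existing hvm_mmio_handler structs. A minimal sketch, assuming the
existing msixtbl_mmio_handler and hpet_mmio_handler are the initial
entries (my guess at what "msix, hpet support" maps to) and that the
remaining slots simply stay NULL until more handlers get wired up:

    static const struct hvm_mmio_handler *const
    pvh_mmio_handlers[HVM_MMIO_HANDLER_NR] =
    {
        &msixtbl_mmio_handler,   /* anticipated msix support */
        &hpet_mmio_handler,      /* anticipated hpet support */
        /* Remaining slots stay NULL; a lookup that finds no matching
         * handler ends up as X86EMUL_UNHANDLEABLE, per #1 above. */
    };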
> That notwithstanding, I think we will want/need pvh_mmio_handlers[]
> at some point anyway - if that's going to be multiplexed inside
> handle_mmio(), then preventing the function from being called would
> be the wrong thing now (as that would then later need to be undone,
> and hence would better get done right from the beginning).
>
> What I'm definitely against - as said before - is option 2.

OK, so it looks like you are OK with #3. We can add pvh_mmio_handlers[]
when adding the msix support, so it gets done properly - it needs some
work on the guest side too, from what I recall. #3 would be easy to
backport also.

thanks
Mukesh
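P.S. For the archives, here is what the #3 change would look like with
the fragments above assembled into the p2m_mmio_dm branch of
hvm_hap_nested_page_fault(). This is only a sketch - the exact
surrounding context in xen/arch/x86/hvm/hvm.c may differ slightly from
what is quoted above:

    if ( (p2mt == p2m_mmio_dm) ||
         (access_w && (p2mt == p2m_ram_ro)) )
    {
        put_gfn(p2m->domain, gfn);

        /* PVH: there is no device model to forward the access to.
         * Fail the access (rc = 0) so the caller prints the usual ept
         * violation details and crashes the guest, instead of
         * injecting #GP as is done for HVM below. */
        if ( is_pvh_vcpu(v) )
        {
            rc = 0;
            goto out;
        }

        rc = 1;
        if ( !handle_mmio() )
            hvm_inject_hw_exception(TRAP_gp_fault, 0);
        goto out;
    }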