Re: [Xen-devel] [PATCH 18/18] PVH xen: introduce vmx_pvh.c
On Fri, 28 Jun 2013 10:31:53 +0100 "Jan Beulich" <JBeulich@xxxxxxxx> wrote:

> >>> On 28.06.13 at 03:35, Mukesh Rathor <mukesh.rathor@xxxxxxxxxx>
> >>> wrote:
> > On Tue, 25 Jun 2013 11:49:57 +0100
> > "Jan Beulich" <JBeulich@xxxxxxxx> wrote:
> >
> >> >>> On 25.06.13 at 02:01, Mukesh Rathor <mukesh.rathor@xxxxxxxxxx>
> >> >>> wrote:
> >> > --- /dev/null
> >> ...
> >> > +    regs->cs = __vmread(GUEST_CS_SELECTOR);
> >>
> >> Which raises the question of whether your uses of
> >> guest_kernel_mode() are appropriate in the first place: before this
> >> series there's no use at all under xen/arch/x86/hvm/.
> >>
> >> And if it is, I'd like to point out once again that this check
> >> should be looking at SS.DPL, not CS.RPL.
> >
> > Are you suggesting changing the macro to check SS.DPL instead of
> > CS.RPL, as it has always done for PV? Note, PVH has checks in this
> > patch to enforce long mode execution always, so CS.RPL should
> > always be valid for PVH.
>
> I'm saying that guest_kernel_mode() should be looking at the
> VMCS for PVH (and, should it happen to be used in HVM code
> paths, for HVM too) rather than at struct cpu_user_regs. That
> makes the saving of the CS selector pointless (in line with how
> HVM behaves), and once you're going through
> hvm_get_segment_register(), you can just as well do this properly
> (i.e. look at SS.DPL rather than CS.RPL). And no, repeatedly
> comparing segment register handling with PV is bogus: in the PV
> case we just don't have the luxury of accessible hidden register
> portions, i.e. we need to get away with looking at selectors only.

Just for my knowledge: why can't we read the GDT entry in the PV case
to get the hidden fields, since we have access to both the GDT base
and the selector?

thanks
Mukesh
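For illustration, a minimal sketch of the VMCS-backed check Jan describes: reading SS out of the guest state via hvm_get_segment_register() and testing its DPL, instead of trusting the CS.RPL bits saved in struct cpu_user_regs. The helper name pvh_guest_kernel_mode() is hypothetical, and the segment_register field layout (attr.fields.dpl) is assumed from Xen sources of that era.

    /*
     * Hypothetical sketch, not code from this patch series.
     * On VMX, hvm_get_segment_register() pulls the hidden segment
     * state (base, limit, attribute bits) out of the VMCS, so the
     * privilege level can be taken from SS.DPL directly rather
     * than from the CS.RPL bits in struct cpu_user_regs.
     */
    static bool_t pvh_guest_kernel_mode(struct vcpu *v)
    {
        struct segment_register ss;

        /* Fills 'ss' from the GUEST_SS_* VMCS fields on VMX. */
        hvm_get_segment_register(v, x86_seg_ss, &ss);

        /* DPL 0 means the guest is executing in kernel mode. */
        return ss.attr.fields.dpl == 0;
    }

This mirrors Jan's PV remark: under PV only the selectors are at hand, hence the CS.RPL check there; with a VMCS the hidden attribute bits are accessible, and SS.DPL is the architecturally correct privilege test.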