
Re: [Xen-devel] [PATCH 18/18] PVH xen: introduce vmx_pvh.c

>>> On 27.06.13 at 05:30, Mukesh Rathor <mukesh.rathor@xxxxxxxxxx> wrote:
> On Tue, 25 Jun 2013 11:49:57 +0100 "Jan Beulich" <JBeulich@xxxxxxxx> wrote:
>> >>> On 25.06.13 at 02:01, Mukesh Rathor <mukesh.rathor@xxxxxxxxxx>
>> >>> wrote:
>> > +static void read_vmcs_selectors(struct cpu_user_regs *regs)
>> > +{
>> > +    regs->ss = __vmread(GUEST_SS_SELECTOR);
>> > +    regs->ds = __vmread(GUEST_DS_SELECTOR);
>> > +    regs->es = __vmread(GUEST_ES_SELECTOR);
>> > +    regs->gs = __vmread(GUEST_GS_SELECTOR);
>> > +    regs->fs = __vmread(GUEST_FS_SELECTOR);
>> > +}
>> By only conditionally reading the selector registers, how do you
>> guarantee that read_segment_register() would always read
>> valid values? I think that macro needs to not look at "regs->?s"
>> at all...
> read_segment_register() gets called for PVH only from the
> emulate_privileged_op intercept path. In that path, we read all the
> selectors before calling emulate_privileged_op. If someone changes the
> code, they'd have to make sure of that. I can add more comments there,
> or go back to always reading all selectors upon vmexit, but you
> already made me change that.

As per my earlier reply, I think this is wrong. Both from a
conceptual POV and considering that new users of
read_segment_register() may appear in the future. You
ought to read the VMCS field in read_segment_register() if
you want to keep avoiding the saving of the selector fields
(which I strongly recommend).

>> > +static int vmxit_invalid_op(struct cpu_user_regs *regs)
>> > +{
>> > +    if ( guest_kernel_mode(current, regs) ||
>> > +         !emulate_forced_invalid_op(regs) )
>> Did you perhaps mean !guest_kernel_mode()?
> No, pvh kernel has been changed to just do cpuid natively. Hopefully,
> over time, looong time, emulate_forced_invalid_op can just be removed.

While I don't disagree with a decision like this, the way you present
this still makes me want to comment: What you do or don't do in
Linux doesn't matter. What matters is a clear ABI description - what
are the requirements to a PVH kernel implementation, and in
particular what are the differences to a PV one? In the case here, a
requirement would now be to _not_ use the PV form of CPUID (or
more generally any operation that would result in #UD with the
expectation that the hypervisor emulates the instruction).

I don't think I've seen such a formalized list of differences; its
absence would make it somewhat difficult for someone else to convert
their favorite OS to support PVH too.

>> > +          ccpu, exit_reason, regs->rip, regs->rsp, regs->rflags,
>> > +          __vmread(GUEST_CR0));
>> > +
>> > +    /* For guest_kernel_mode which is called from most places below. */
>> > +    regs->cs = __vmread(GUEST_CS_SELECTOR);
>> Which raises the question of whether your uses of
>> guest_kernel_mode() are appropriate in the first place: Before this
>> series there's no use at all under xen/arch/x86/hvm/.
> HVM should do this for debug intercepts; otherwise it wrongly
> intercepts user-level debuggers like gdb.

And why would intercepting kernel debuggers like kdb or kgdb be any
better?

> HVM can also use this check to emulate the forced invalid op for user
> level only. Since there's a cpuid intercept, and we are trying to
> reduce pv-ops, this seems plausible.

HVM isn't supposed to be using PV CPUID. Ideally the same would
be true for PVH (i.e. it may be better to make it look to user space
like HVM, but I'm not sure if there aren't other collisions).


Xen-devel mailing list


