
Re: [Xen-devel] [PATCH 09/18] PVH xen: Support privileged op emulation for PVH

>>> On 28.06.13 at 01:43, Mukesh Rathor <mukesh.rathor@xxxxxxxxxx> wrote:
> On Thu, 27 Jun 2013 08:22:42 +0100
> "Jan Beulich" <JBeulich@xxxxxxxx> wrote:
>> >>> On 27.06.13 at 00:41, Mukesh Rathor <mukesh.rathor@xxxxxxxxxx>
>> >>> wrote:
>> > On Tue, 25 Jun 2013 10:36:41 +0100
>> > "Jan Beulich" <JBeulich@xxxxxxxx> wrote:
>> > 
>> >> >>> On 25.06.13 at 02:01, Mukesh Rathor <mukesh.rathor@xxxxxxxxxx>
>> >> > + * in save_segments(), current has been updated to next, and no
>> >> > longer pointing
>> >> > + * to the PVH.
>> >> > + */
>> >> 
>> >> This is bogus - you shouldn't need any of the {save,load}_segment()
>> >> machinery for PVH, and hence this is not a valid reason for adding
>> >> a vcpu parameter here.
>> > 
>> > Ok, let's revisit this, since it's been a few months already:
>> > 
>> > read_segment_register() is called from a few places for PVH, and
>> > for PVH it needs to read the value from regs, so it needs to be
>> > modified to check for PVH. Originally I had started with checking
>> > is_pvh_vcpu(current), but that quickly failed because of the
>> > context switch call chain:
>> > 
>> > __context_switch -> ctxt_switch_from --> save_segments ->
>> > read_segment_register
>> > 
>> > In this path, going from PV to PVH, the intention is to save the
>> > segments for PV, but since current has already been updated to
>> > point to the PVH vcpu, checking current is not correct. Hence the
>> > need for a vcpu parameter. I will expand the comments in the macro
>> > prologue in the next patch version.
>> No. I already said that {save,load}_segments() ought to be
>> skipped for PVH, as what it does is already done by VMRESUME/
>> #VMEXIT. And the function is being passed a vCPU pointer, so
>> simply making the single call to save_segments() conditional on
>> is_pv_vcpu(), and converting the !is_hvm_vcpu() around the
>> call to load_LDT() and load_segments() to is_pv_vcpu() (provided
>> the LDT handling isn't needed on the same basis) should eliminate
>> that need.
> They are *not* being called for PVH - where do you see that? Are you 
> looking at the right patches? They are called for PV. Again, going from 
> PV to PVH in the context switch, current will be pointing to the PVH 
> vcpu, not the PV one, when save_segments calls the macro to save the 
> segments for PV (not PVH). Hence the vcpu is passed to save_segments, 
> and we need to pass it to our famed macro above!

But I'm not arguing that the vCPU pointer shouldn't be passed to
this - I'm trying to tell you that having this macro read the selector
values from struct cpu_user_regs in the PVH case is wrong. It was
your continuing to point at the context switch path that made me
believe you don't yet properly suppress the uses of
{save,load}_segments() for PVH.
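To illustrate the shape being argued for here (a sketch only; the types
and the two bool flags are illustrative stand-ins, not Xen's real
definitions):

```c
#include <assert.h>
#include <stdbool.h>

/* Minimal stand-ins for Xen's types: the names follow the discussion,
 * but the struct layout and the flags are purely illustrative. */
enum guest_type { guest_pv, guest_pvh, guest_hvm };

struct vcpu {
    enum guest_type type;
    bool segments_saved;     /* illustrative, not a real Xen field */
    bool segments_loaded;    /* illustrative, not a real Xen field */
};

static bool is_pv_vcpu(const struct vcpu *v) { return v->type == guest_pv; }

static void save_segments(struct vcpu *v) { v->segments_saved = true; }
static void load_segments(struct vcpu *v) { v->segments_loaded = true; }

/* The suggested shape: gate both paths on is_pv_vcpu() (rather than
 * !is_hvm_vcpu()), so a PVH vcpu - whose segment state is handled by
 * VMRESUME/#VMEXIT - skips them entirely.  Note both helpers take the
 * subject vcpu explicitly: `current` may already point at `next` here. */
static void context_switch_sketch(struct vcpu *prev, struct vcpu *next)
{
    if ( is_pv_vcpu(prev) )
        save_segments(prev);
    if ( is_pv_vcpu(next) )
        load_segments(next);
}
```

With that gating, the PV-to-PVH switch still saves the outgoing PV
vcpu's segments, but never touches segment state for the incoming PVH
one.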

>> Furthermore - the reading from struct cpu_user_regs continues
>> to be bogus (read: at least a latent bug) as long as you don't
>> always save the selector registers, which you now validly don't
>> do anymore. 
> Right, because you made me move it to the path that calls the macro.
> So, for the path where the macro is called, the selectors would have
> been read. So, what's the latent bug?

The problem is that you think that now and forever this macro
will only be used from the MMIO emulation path (or some such, in
any case - just from one very specific path). This is an assumption
you may make while in an early development phase, but not in
patches that you intend to be committed: Someone adding another
use of the macro is _very_ unlikely to go and check what constraints
apply to that use. The macro has to work in the general case.
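A general-case version of the macro would dispatch on the guest type
rather than assume the calling path has spilled the selectors to the
stack frame. A minimal, self-contained sketch (the types and the VMCS
accessor below are stand-ins for struct cpu_user_regs and
hvm_get_segment_register(), not the real Xen definitions):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-ins; the real Xen macro and types differ. */
enum guest_type { guest_pv, guest_pvh };

struct cpu_user_regs { uint16_t ds; };   /* one selector, for brevity */

struct vcpu {
    enum guest_type type;
    struct cpu_user_regs regs;  /* PV: selectors saved on entry */
    uint16_t vmcs_ds;           /* PVH: the selector lives in the VMCS */
};

/* Hypothetical lightweight accessor standing in for
 * hvm_get_segment_register(): fetch the selector from the (stubbed)
 * VMCS rather than from the stack frame. */
static uint16_t vmcs_read_ds(const struct vcpu *v) { return v->vmcs_ds; }

/* The point of the objection: a general-purpose read_segment_register()
 * must not assume the selectors were spilled to cpu_user_regs; for PVH
 * it should consult the VMCS no matter which path called it. */
static uint16_t read_sreg_ds(const struct vcpu *v)
{
    return v->type == guest_pv ? v->regs.ds : vmcs_read_ds(v);
}
```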

>> You should be consulting the VMCS instead, i.e. go
>> through hvm_get_segment_register().
>> Whether a more lightweight variant reading just the selector is
>> in order I can't immediately tell.
> Well, that would require v == current always, which is not guaranteed
> in the macro path. What exactly is the problem with the way it is?

I think you need to view this slightly differently: At present, the
macro reads the live register values. Which means even if
v != current, we're still in a state where the hardware has the
correct values. This ought to apply in exactly the same way to
PVH - the current VMCS should still be holding the right values.

Furthermore, the case is relatively easy to determine: Instead of
only looking at current, you could also take per_cpu(curr_vcpu, )
into account.
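That check could be sketched like this (a toy model: `current_v` and
`curr_vcpu` stand in for Xen's `current` and `per_cpu(curr_vcpu, cpu)`,
which lag each other across __context_switch):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct vcpu { int id; };

/* Stand-ins for the two per-CPU pointers: `current` is updated early in
 * the context switch, while curr_vcpu still names the vCPU whose state
 * the hardware is actually holding (lazy state switching). */
static struct vcpu *current_v;
static struct vcpu *curr_vcpu;

/* A vCPU's live register/VMCS state is present on this pCPU if it is
 * either the logical current vCPU or the lazy state owner. */
static bool regs_live_here(const struct vcpu *v)
{
    return v == current_v || v == curr_vcpu;
}
```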

And finally - vmx_get_segment_register() already takes a vCPU
pointer, and uses vmx_vmcs_{enter,exit}(), so there's no
dependency on that function running in the context of the subject
vCPU. vmx_vmcs_enter() itself checks whether it's running on
the subject vCPU though, so that may need inspection/tweaking
if the context switch path would really ever get you into that
macro (I doubt that it will though, and making assumptions about
the context switch path [not] doing certain things _is_ valid, as
opposed to making such assumptions about arbitrary code).
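The enter/exit bracket can be modeled very roughly as follows (a stub
that ignores the locking and cross-CPU details the real
vmx_vmcs_enter() involves; the field names are illustrative):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Stub model of the vmx_vmcs_enter()/vmx_vmcs_exit() bracket: make a
 * (possibly remote) vCPU's VMCS addressable on this pCPU, read fields,
 * then release it again. */
struct vmcs { uint16_t cs_sel; };

struct vcpu {
    struct vmcs vmcs;
    bool vmcs_loaded;        /* tracks the open/close of the bracket */
};

static void vmcs_enter(struct vcpu *v) { v->vmcs_loaded = true; }
static void vmcs_exit(struct vcpu *v)  { v->vmcs_loaded = false; }

/* Reading a segment selector for an arbitrary vCPU: nothing here
 * requires v to be the vCPU currently running on this pCPU, because
 * the bracket makes its VMCS accessible first. */
static uint16_t get_cs_sel(struct vcpu *v)
{
    vmcs_enter(v);
    uint16_t sel = v->vmcs.cs_sel;
    vmcs_exit(v);
    return sel;
}
```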

