
Re: [Xen-devel] [PATCH V2 4/5] xen, libxc: Request page fault injection via libxc



> From: Razvan Cojocaru [mailto:rcojocaru@xxxxxxxxxxxxxxx]
> Sent: Thursday, September 04, 2014 1:02 AM
> 
> On 09/04/2014 10:54 AM, Razvan Cojocaru wrote:
> > On 09/04/2014 04:25 AM, Tian, Kevin wrote:
> >>> From: Razvan Cojocaru [mailto:rcojocaru@xxxxxxxxxxxxxxx]
> >>> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> >>> index 5761ff9..5d3e4d4 100644
> >>> --- a/xen/arch/x86/hvm/hvm.c
> >>> +++ b/xen/arch/x86/hvm/hvm.c
> >>> @@ -420,6 +420,31 @@ static bool_t hvm_wait_for_io(struct
> >>> hvm_ioreq_vcpu *sv, ioreq_t *p)
> >>>      return 1;
> >>>  }
> >>>
> >>> +static bool_t hvm_is_pf_requested(struct vcpu *v)
> >>> +{
> >>> +    const struct domain *d = v->domain;
> >>> +    struct segment_register seg;
> >>> +    uint64_t mask;
> >>> +
> >>> +    hvm_get_segment_register(v, x86_seg_ss, &seg);
> >>> +
> >>> +    if ( seg.attr.fields.dpl != 3 ) /* Guest is not in user mode */
> >>> +        return 0;
> >>> +
> >>> +    if ( hvm_long_mode_enabled(v) )
> >>> +        mask = PADDR_MASK & PAGE_MASK; /* Bits 51:12. */
> >>> +    else if ( hvm_pae_enabled(v) )
> >>> +        mask = 0x00000000ffffffe0; /* Bits 31:5. */
> >>> +    else
> >>> +        mask = (uint32_t)PAGE_MASK; /* Bits 31:12. */
> >>> +
> >>> +    if ( (v->arch.hvm_vcpu.guest_cr[3] & mask) !=
> >>> +         (d->arch.hvm_domain.inject_trap.cr3 & mask) )
> >>> +        return 0;
> >>> +
> >>> +    return 1;
> >>> +}
> >>> +
> >>>  void hvm_do_resume(struct vcpu *v)
> >>>  {
> >>>      struct domain *d = v->domain;
> >>> @@ -451,6 +476,15 @@ void hvm_do_resume(struct vcpu *v)
> >>>      }
> >>>
> >>>      /* Inject pending hw/sw trap */
> >>
> >> You may want to make the comment more introspection-related.
> >>
> >>> +    if ( d->arch.hvm_domain.inject_trap.vector != -1 &&
> >>> +         v->arch.hvm_vcpu.inject_trap.vector == -1 &&
> >>> +         hvm_is_pf_requested(v) )
> >>> +    {
> >>> +        hvm_inject_trap(&d->arch.hvm_domain.inject_trap);
> >>> +        d->arch.hvm_domain.inject_trap.vector = -1;
> >>> +    }
> >>> +
> >>> +    /* Inject pending hw/sw trap */
> >>>      if ( v->arch.hvm_vcpu.inject_trap.vector != -1 )
> >>>      {
> >>>          hvm_inject_trap(&v->arch.hvm_vcpu.inject_trap);
> >>> @@ -1473,9 +1507,10 @@ int hvm_domain_initialise(struct domain *d)
> >>>              printk(XENLOG_G_INFO "PVH guest must have HAP
> on\n");
> >>>              return -EINVAL;
> >>>          }
> >>> -
> >>>      }
> >>>
> >>> +    d->arch.hvm_domain.inject_trap.vector = -1;
> >>> +
> >>
> >> A question here. With the new design, you now have two places which may
> >> cache fault injection information. If, unfortunately, there are two
> >> consecutive hypercalls, one with vcpuid=-1 and the other with vcpuid=n,
> >> it's possible that both injections will be handled together if the same
> >> vcpu is chosen for the two pending injections.
> >>
> >> I think the hypercall needs a check of all previous pending requests, not
> >> only in the domain-specific structure as you did in this version.
> 
> If two consecutive hypercalls take place, one with a wildcard and then one
> with a specific vcpuid, the per-domain injection data will indeed be set
> along with the VCPU-specific injection data.

I'm thinking of one hypercall with a wildcard, while another hypercall
comes with a specific vcpuid.

> 
> However, the above code gives the per-VCPU injection data precedence:
> 
> +    if ( d->arch.hvm_domain.inject_trap.vector != -1 &&
> +         v->arch.hvm_vcpu.inject_trap.vector == -1 &&
> +         hvm_is_pf_requested(v) )
> 
> so if v->arch.hvm_vcpu.inject_trap.vector != -1, it will not try to
> inject the per-domain trap (it will only inject it at a later
> hvm_do_resume(), if it's possible then). In other words, the per-domain
> data will be held on to until such time as it becomes possible to inject
> the trap (by any available matching VCPU).

If the vcpu injection data is always preferred, then you don't have a
consistent policy on whether the old or the new injection is favored. In my
example above, regardless of whether the hypercall with a specific vcpuid
comes first, it is always favored over the wildcard-based injection.

If this is what your design intends, it's better to clarify that. But in
your hypercall code you do already check whether a pending wildcard-based
injection is there, which looks incomplete to me.


> 
> It's difficult to check for this case beforehand, because at the time of
> the per-domain hypercall we don't know which VCPU will end up having to
> do the work of injecting the trap.

If the policy is simply that a new injection gets -EBUSY as long as a valid
earlier injection request is pending, then you need to scan both the domain
structure and all the vcpu structures. Currently you only do that for the
domain structure. I just want to see a consistent policy. :-)

> 
> 
> Thanks,
> Razvan Cojocaru

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

