
Re: [Xen-devel] [PATCH v2 RFC 2/4] x86/PV: support data breakpoint extension registers



>>> On 27.03.14 at 17:23, <Ian.Campbell@xxxxxxxxxx> wrote:
> On Thu, 2014-03-27 at 16:03 +0000, Jan Beulich wrote:
>> >>> On 27.03.14 at 16:28, <Ian.Campbell@xxxxxxxxxx> wrote:
>> > Do the protocol docs in xg_save_restore need updating due to this
>> > change?
>> 
>> Not sure about that - this doesn't seem to talk about what's inside
>> the "bodies".
> 
> Is this extending the "VCPU context data"? or one of the XC_SAVE_ID_*
> things? I think it would be worth extending that to indicate the new
> "optional" data, where and what it is etc.

Yeah, I could probably add a note to the "extv" thing, even if it
doesn't really fit.

>> > How does N->N+1 migration work out?
>> 
>> Since the structure size grows, and that size is embedded in the
>> structure, input coming from N will be treated as not having any
>> MSRs.
> 
> This is the size of the "extv" block in the header phase?

No, the "size" member of struct xen_domctl_ext_vcpucontext.

>> >> @@ -2130,9 +2162,25 @@ int xc_domain_restore(xc_interface *xch,
>> >>              goto vcpu_ext_state_restore;
>> >>          memcpy(&domctl.u.ext_vcpucontext, vcpup, 128);
>> >>          vcpup += 128;
>> >> +        if ( domctl.u.ext_vcpucontext.msr_count )
>> >> +        {
>> >> +            size_t sz = domctl.u.ext_vcpucontext.msr_count * sizeof(*msrs);
>> >> +
>> >> +            msrs = xc_hypercall_buffer_alloc(xch, msrs, sz);
>> >> +            if ( !msrs )
>> >> +            {
>> >> +                PERROR("No memory for vcpu%d MSRs", i);
>> >> +                goto out;
>> >> +            }
>> >> +            memcpy(msrs, vcpup, sz);
>> > 
>> > This seems to be opencoding the hypercall bounce buffer stuff to some
>> > extent.
>> 
>> Just like in the xstate logic. Aiui there's no way currently for
>> do_domctl() to know that in some cases embedded handles need
>> following. Or at least I didn't spot it, so if there is a mechanism to
>> avoid this open coding, please point me at it.
> 
> I didn't mean that do_domctl could do the bouncing, but rather than this
> code could itself use the bounce buffer stuff to bounce from vcpup into
> msrs automatically, i.e. avoiding the open coded memcpy.
> 
> Something like:
>         if ( domctl.u.ext_vcpucontext.msr_count )
>         {
>               /*const?*/size_t sz = domctl.u.ext_vcpucontext.msr_count * 
> sizeof(*msrs);
>               DECLARE_HYPERCALL_BOUNCE(vcpup, sz, 
> XC_HYPERCALL_BUFFER_BOUNCE_IN);
>         
>               if ( xc_hypercall_bounce_pre(xch, vcpup) )
>                       badness
>         
>               domclt.cmd = ...;
>               domctl.domain = ...; etc
>               set_xen_guest_handle(domctl.u.ext_vcpucontext.msrs, vcpup);
>         
>               frc = do_domctl(xch, &domctl)
>         
>               xc_hypercall_boucne_post(xch, vcpup)
>         }
> 
> Or it might be clearer to use
>          DECLARE_NAMED_HYPERCALL_BOUNCE(msrs, vcpup, sz, ..._BOUNCE_IN)
> so you can use "msrs" for the remainder.
> 
> Lastly you can pass size as 0 in the declaration and use
> HYPERCALL_BOUNCE_SET_SIZE, if that works better.

I'll see if I can make this build (for testing I have to rely on AMD
anyway).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
