
Re: [Xen-devel] [PATCH v2 COLOPre 02/13] tools/libxc: support to resume uncooperative HVM guests



On 06/11/2015 04:44 PM, Ian Campbell wrote:
> On Thu, 2015-06-11 at 10:42 +0800, Wen Congyang wrote:
>> On 06/10/2015 11:18 PM, Ian Campbell wrote:
>>> On Mon, 2015-06-08 at 11:43 +0800, Yang Hongyang wrote:
>>>> From: Wen Congyang <wency@xxxxxxxxxxxxxx>
>>>>
>>>> For PVHVM, the hypercall return code is 0, and the guest can be
>>>> resumed in a new domain context.
>>>> We normally suspend and resume a PVHVM guest like this:
>>>> 1. suspend it via the event channel
>>>> 2. modify the return code to 1
>>>> 3. the guest then knows that the suspend was cancelled, and we use the
>>>>    fast path to resume it.
>>>>
>>>> Under COLO, we will update the guest's state (modify memory, CPU
>>>> registers, device status, ...). In this case, we cannot use the fast
>>>> path to resume it. Keep the return code 0, and use the slow path to
>>>> resume the guest. Since we have updated the guest state, we call it a
>>>> new domain context.
>>>>
>>>> For HVM, the hypercall is a NOP.
>>>
>>> This doesn't match my reading of domain_resume on the Xen side, which is
>>> the ultimate effect of this hypercall. It seems to unpause the domain
>>> (and all vcpus) regardless of the domain type, including PVHVM vs HVM
>>> (which isn't something Xen is generally aware of anyway).
>>>
>>> I also can't really follow the stuff about PVHVM vs HVM vs uncooperative
>>> guests, and I certainly can't see where the PVHVM vs HVM distinction is
>>> made in this patch.
>>
>> Sorry, my mistake. I have read the code again:
>>
>> 1. suspend
>> a. PVHVM and PV: we suspend the guest the same way (send the suspend
>>    request to the guest)
>> b. pure HVM: we call xc_domain_shutdown(..., SHUTDOWN_suspend) to suspend
>>    the guest
>> c. ???: suspend the guest via the XenBus control node
> 
> AFAIK c is another option under a; it depends on whether the guest
> supports evtchn or not. If not, then the xenstore variant is used.
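
To make the suspend variants above concrete, here is a minimal sketch of
how a toolstack process in dom0 could trigger them, assuming an open
xc_interface and xenstore connection (the helper names and error handling
are illustrative only; the event-channel variant is omitted since it needs
the suspend event channel registered by the guest):

    #include <stdio.h>
    #include <string.h>
    #include <xenctrl.h>
    #include <xenstore.h>

    /* Variant b: pure HVM guest, suspended via the shutdown hypercall. */
    static int suspend_pure_hvm(xc_interface *xch, uint32_t domid)
    {
        return xc_domain_shutdown(xch, domid, SHUTDOWN_suspend);
    }

    /* Xenstore variant: ask a PV-aware guest to suspend itself by
     * writing "suspend" to its control/shutdown node. */
    static int suspend_via_xenstore(struct xs_handle *xsh, uint32_t domid)
    {
        char path[64];

        snprintf(path, sizeof(path),
                 "/local/domain/%u/control/shutdown", domid);
        return xs_write(xsh, XBT_NULL, path, "suspend",
                        strlen("suspend")) ? 0 : -1;
    }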

I remember it now. IIRC, the behavior in the guest is the same either way. Is that right?

Thanks
Wen Congyang

> 
>> I don't know in which case we would end up using c.
>>
>> 2. Resume:
>> a. fast path (see the sketch below)
>>    In this case, we don't change the guest's state.
>>    PV: modify the return code to 1, and then call the domctl
>>    XEN_DOMCTL_resumedomain
>>    PVHVM: same as PV
>>    HVM: do nothing in modify_returncode, and then call the domctl
>>    XEN_DOMCTL_resumedomain
>> b. slow path
>>    In this case, we have changed the guest's state.
>>    PV: update the start info and reset all secondary vCPU states, then
>>    call the domctl XEN_DOMCTL_resumedomain
>>    PVHVM and HVM cannot be resumed this way.
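
As a rough illustration of the fast path described in a. above, the
sequence inside libxc amounts to patching the hypercall return value seen
by the guest and then issuing XEN_DOMCTL_resumedomain. A minimal sketch,
assuming a 64-bit x86 guest and the libxc-internal helpers used by
tools/libxc/xc_resume.c (the real modify_returncode() has more checks,
e.g. that the domain is actually suspended):

    #include "xc_private.h"   /* DECLARE_DOMCTL, do_domctl, as in xc_resume.c */

    static int fast_path_resume_sketch(xc_interface *xch, uint32_t domid)
    {
        vcpu_guest_context_any_t ctxt;
        DECLARE_DOMCTL;

        /* Fetch vcpu0's register state and make the suspend hypercall
         * appear to return 1 ("suspend cancelled") in the guest. */
        if ( xc_vcpu_getcontext(xch, domid, 0, &ctxt) != 0 )
            return -1;
        ctxt.x64.user_regs.rax = 1;  /* a 32-bit guest would use x32.user_regs.eax */
        if ( xc_vcpu_setcontext(xch, domid, 0, &ctxt) != 0 )
            return -1;

        /* Then let Xen unpause the domain. */
        domctl.cmd = XEN_DOMCTL_resumedomain;
        domctl.domain = domid;
        return do_domctl(xch, &domctl);
    }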
>>
>> For PVHVM, in my test, only calling the domctl XEN_DOMCTL_resumedomain
>> works. I am not sure whether we should also update the start info and
>> reset all secondary vCPU states.
>>
>> For a pure HVM guest, in my test, only calling the domctl
>> XEN_DOMCTL_resumedomain works.
>>
>> So we can call libxl__domain_resume(..., 1) if we do not change the
>> guest state, and libxl__domain_resume(..., 0) otherwise.
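
In other words, the caller just picks the 'fast' flag. A minimal sketch
using the public libxc entry point xc_domain_resume() (which, as far as I
can tell, is what libxl__domain_resume() ends up calling); the
state_was_modified condition is only illustrative of the COLO case:

    #include <xenctrl.h>

    /* Resume a guest after a checkpoint.  state_was_modified would be
     * true on the COLO secondary, where memory/registers were rewritten. */
    static int resume_after_checkpoint(xc_interface *xch, uint32_t domid,
                                       int state_was_modified)
    {
        /* fast = 1: cooperative/fast path (suspend is cancelled);
         * fast = 0: slow/uncooperative path (new domain context). */
        return xc_domain_resume(xch, domid, state_was_modified ? 0 : 1);
    }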
>>
>> Any suggestion is welcomed.
>>
>> Thanks
>> Wen Congyang
>>
>>
>>>
>>> Ian.
>>>
>>>
>>>>
>>>> Signed-off-by: Wen Congyang <wency@xxxxxxxxxxxxxx>
>>>> Signed-off-by: Yang Hongyang <yanghy@xxxxxxxxxxxxxx>
>>>> ---
>>>>  tools/libxc/xc_resume.c | 22 ++++++++++++++++++----
>>>>  1 file changed, 18 insertions(+), 4 deletions(-)
>>>>
>>>> diff --git a/tools/libxc/xc_resume.c b/tools/libxc/xc_resume.c
>>>> index e67bebd..bd82334 100644
>>>> --- a/tools/libxc/xc_resume.c
>>>> +++ b/tools/libxc/xc_resume.c
>>>> @@ -109,6 +109,23 @@ static int xc_domain_resume_cooperative(xc_interface *xch, uint32_t domid)
>>>>      return do_domctl(xch, &domctl);
>>>>  }
>>>>  
>>>> +static int xc_domain_resume_hvm(xc_interface *xch, uint32_t domid)
>>>> +{
>>>> +    DECLARE_DOMCTL;
>>>> +
>>>> +    /*
>>>> +     * For PVHVM, the hypercall return code is 0: this is not a
>>>> +     * fast-path resume, so we do not call modify_returncode as
>>>> +     * xc_domain_resume_cooperative does (the guest is resumed in
>>>> +     * a new domain context).
>>>> +     *
>>>> +     * For a pure HVM guest, the hypercall is a NOP.
>>>> +     */
>>>> +    domctl.cmd = XEN_DOMCTL_resumedomain;
>>>> +    domctl.domain = domid;
>>>> +    return do_domctl(xch, &domctl);
>>>> +}
>>>> +
>>>>  static int xc_domain_resume_any(xc_interface *xch, uint32_t domid)
>>>>  {
>>>>      DECLARE_DOMCTL;
>>>> @@ -138,10 +155,7 @@ static int xc_domain_resume_any(xc_interface *xch, uint32_t domid)
>>>>       */
>>>>  #if defined(__i386__) || defined(__x86_64__)
>>>>      if ( info.hvm )
>>>> -    {
>>>> -        ERROR("Cannot resume uncooperative HVM guests");
>>>> -        return rc;
>>>> -    }
>>>> +        return xc_domain_resume_hvm(xch, domid);
>>>>  
>>>>      if ( xc_domain_get_guest_width(xch, domid, &dinfo->guest_width) != 0 )
>>>>      {
>>>
>>>
>>>
>>>
>>
> 
> 
>
> 


