
Re: [Xen-devel] [PATCH v2] x86/nhvm: properly clean up after failure to set up all vCPU-s



>>> On 21.02.13 at 12:44, Tim Deegan <tim@xxxxxxx> wrote:
> At 11:26 +0000 on 21 Feb (1361445983), Jan Beulich wrote:
>> Otherwise we may leak memory when setting up nHVM fails half way.
>> 
>> This implies that the individual destroy functions will have to remain
>> capable (in the VMX case they first need to be made so, following
>> 26486:7648ef657fe7 and 26489:83a3fa9c8434) of being called for a vCPU
>> that the corresponding init function was never run on.
>> 
>> Once at it, also clean up some inefficiencies in the corresponding
>> parameter validation code.
>> 
>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>> ---
>> v2: nVMX fixes required by 26486:7648ef657fe7 and 26489:83a3fa9c8434.
>> 
>> --- a/xen/arch/x86/hvm/hvm.c
>> +++ b/xen/arch/x86/hvm/hvm.c
>> @@ -3916,20 +3916,25 @@ long do_hvm_op(unsigned long op, XEN_GUE
>>                      rc = -EPERM;
>>                      break;
>>                  }
>> +                if ( !a.value )
>> +                    break;
> 
> Surely setting from 1 to 0 should either disable nested-hvm entirely
> (including calling nestedhvm_vcpu_destroy()) or fail.  Otherwise I think
> alternating 1 and 0 will cause nestedhvm_vcpu_initialise() to allocate
> fresh state every time (& leak the old state).

No, that's precisely not the case with this patch (though it was
before, when setup failed on a vCPU other than the first).

Of course we can _change_ to the model of fully disabling nHVM
in that case, but honestly I don't see the point, and the code is
simpler without doing so.
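
To illustrate, here is a minimal stand-alone sketch of the shape this
path ends up with: enabling initialises nested state for every vCPU
and, if that fails part way, runs the destroy hook over all vCPUs
(which therefore has to cope with a vCPU whose init never ran), while
a subsequent write of 0 simply does nothing. The struct and helper
names below are made up for illustration only; this is not the actual
Xen code or its nestedhvm_* interfaces.

    /* Sketch of the unwind-on-partial-failure pattern discussed above. */
    #include <stdlib.h>
    #include <string.h>

    #define NR_VCPUS 4

    struct vcpu { void *nested_state; };

    static int vcpu_nested_init(struct vcpu *v)
    {
        v->nested_state = calloc(1, 64);
        return v->nested_state ? 0 : -1;       /* may fail half way through */
    }

    static void vcpu_nested_destroy(struct vcpu *v)
    {
        free(v->nested_state);                 /* safe even if init never ran */
        v->nested_state = NULL;
    }

    static int domain_nested_enable(struct vcpu vcpus[NR_VCPUS])
    {
        int rc = 0;

        for ( int i = 0; i < NR_VCPUS; i++ )
            if ( (rc = vcpu_nested_init(&vcpus[i])) != 0 )
                break;

        if ( rc )
            for ( int i = 0; i < NR_VCPUS; i++ )  /* unwind all vCPUs, not just
                                                     the ones already set up */
                vcpu_nested_destroy(&vcpus[i]);

        return rc;
    }

    int main(void)
    {
        struct vcpu vcpus[NR_VCPUS];

        memset(vcpus, 0, sizeof(vcpus));
        return domain_nested_enable(vcpus) ? EXIT_FAILURE : EXIT_SUCCESS;
    }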

Jan

