
Re: [Xen-devel] [PATCH] x86/AMD: Add support for AMD's OSVW feature in guests



On 31/01/2012 10:36, "Christoph Egger" <Christoph.Egger@xxxxxxx> wrote:

> On 01/31/12 11:13, Jan Beulich wrote:
>>> --- a/xen/arch/x86/platform_hypercall.c Thu Jan 26 17:43:31 2012 +0000
>>> +++ b/xen/arch/x86/platform_hypercall.c Mon Jan 30 18:00:50 2012 +0100
>>> @@ -166,7 +166,17 @@ ret_t do_platform_op(XEN_GUEST_HANDLE(xe
>>>               break;
>>> 
>>>           guest_from_compat_handle(data, op->u.microcode.data);
>>> +
>>> +        while ( !domctl_lock_acquire() )
>> 
>> It's not clear to me what you're trying to protect against here. I don't
>> think this prevents guests from making progress (and hence possibly
>> seeing the intermediate [and wrong] OSVW state between
>> svm_host_osvw_reset() and svm_host_osvw_init()).
> 
> The reason is that when we allocate VCPUs (via XEN_DOMCTL_max_vcpus
> and possibly other controls) we call svm_vcpu_initialise() ->
> svm_guest_osvw_init(). If we are in the middle of a microcode update
> then osvw_length/osvw_status may be in an inconsistent state.

Deserves a code comment. Even better, deserves its own synchronisation
(e.g., your own specific lock for this purpose). It would be nice to avoid
further intricate dependencies on the big domctl lock, so we can consider
removing it in future.

 -- Keir

> Christoph
> 
>> Jan
>> 
>>> +            if ( hypercall_preempt_check() )
>>> +            {
>>> +                ret = hypercall_create_continuation(
>>> +                    __HYPERVISOR_platform_op, "h", u_xenpf_op);
>>> +                goto out;
>>> +            }
>>> +
>>>           ret = microcode_update(data, op->u.microcode.length);
>>> +        domctl_lock_release();
>>>       }
>>>       break;
>>> 
>> 
>> 
>> 
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
