
Re: [Xen-devel] domain restore operation ordering



>>> On 26.03.14 at 11:19, <Ian.Campbell@xxxxxxxxxxxxx> wrote:
> On Wed, 2014-03-26 at 10:00 +0000, Jan Beulich wrote:
>> Hi,
>> 
>> looking at some restore log with HVM debug options enabled in the
>> hypervisor, I notice
>> 
>> (XEN) HVM2 restore: MTRR 0
>> (XEN) [HVM:0.0] <mtrr_var_range_msr_set> invalid msr content:fff8000800
>> (XEN) 
>> (XEN) [HVM:0.0] <mtrr_var_range_msr_set> invalid msr content:fffc000800
>> 
>> and looking into the reasons for that I think both xend and xl apply
>> the CPUID policy and overrides only _after_ having processed the
>> restore image. Yet mtrr_var_range_msr_set() uses domain_cpuid()
>> to determine the number of physical address bits in order to validate
>> the register contents.
>> 
>> Is there any reason why the ordering needs to be the way it
>> currently is?
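
To decode those values (my reading, assuming these are PHYSMASK writes
with the usual layout, i.e. bit 11 the valid bit and bits 12 and up the
mask): fff8000800 sets mask bits 27-39 and fffc000800 sets bits 26-39,
so both require a guest MAXPHYADDR of at least 40.
mtrr_var_range_msr_set() obtains that width via
domain_cpuid(d, 0x80000008, ...); before the CPUID policy has been
applied that leaf reads as zero, so the code falls back to the narrower
legacy default and the top mask bits look reserved. Roughly (a
simplified sketch from memory, not verbatim from
xen/arch/x86/hvm/mtrr.c):

    uint32_t eax, ebx, ecx, edx;
    uint8_t phys_addr = 36;        /* legacy fallback when the leaf is absent */
    uint64_t msr_mask;

    domain_cpuid(d, 0x80000000, 0, &eax, &ebx, &ecx, &edx);
    if ( eax >= 0x80000008 )
    {
        domain_cpuid(d, 0x80000008, 0, &eax, &ebx, &ecx, &edx);
        phys_addr = (uint8_t)eax;  /* EAX[7:0] = MAXPHYADDR */
    }

    /* Any bit at or above MAXPHYADDR must be clear. */
    msr_mask = ~((1ULL << phys_addr) - 1);
    if ( msr_content & msr_mask )
        return 0;                  /* -> "invalid msr content" */

With the policy not yet applied, phys_addr stays at the fallback, and
mask bits 36-39 trigger the rejections seen above.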
> 
> I don't know; my guess is it just ended up that way for no particular
> reason.
> 
> I'd be inclined to just try moving the libxl_cpuid_apply_policy call
> from libxl__build_post to libxl__build_pre and see what breaks.
> 
> (it seems to me that libxl__arch_domain_create would be the correct
> build_pre location, removing the need for libxl_nocpuid.c, I suspect)
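
For reference, this is roughly what that would look like (untested
sketch; ctx and info are assumed to be in scope in the build_pre path
as they are in libxl__build_post() today):

    /* Untested sketch: run this from libxl__build_pre() (or
     * libxl__arch_domain_create(), per the suggestion above) instead
     * of libxl__build_post(), so the CPUID policy is in place before
     * the restore image is processed. */
    libxl_cpuid_apply_policy(ctx, domid);
    if (info->cpuid != NULL)
        libxl_cpuid_set(ctx, domid, info->cpuid);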
> 
>>  If so, we may need to tweak mtrr_var_range_msr_set()
>> to special-case restoring (albeit I can't think of a way to tell this,
>> perhaps apart from d != current->domain, which wouldn't really be
>> a restore-specific check).

Actually, I just found another reason why we need to make this change
in the hypervisor (and trust the controlling domain to provide a
consistent set of CPUID and MTRR values; this doesn't introduce a
security issue, as only the controlled guest would be affected if they
weren't consistent).
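
Concretely, I'd picture something along these lines in
mtrr_var_range_msr_set() (sketch only, using the d != current->domain
idea from above, not a final patch):

    /* Sketch: only enforce the MAXPHYADDR check for writes coming
     * from the guest itself; for toolstack writes (e.g. during
     * restore), trust the controlling domain to provide a consistent
     * CPUID/MTRR set -- if it doesn't, only the controlled guest is
     * affected. */
    if ( unlikely(msr_content & msr_mask) && d == current->domain )
    {
        HVM_DBG_LOG(DBG_LEVEL_MSR, "invalid msr content:%"PRIx64"\n",
                    msr_content);
        return 0;
    }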

Jan

