
Re: [Xen-devel] vMCE vs migration



>>> On 10.02.12 at 22:28, Olaf Hering <olaf@xxxxxxxxx> wrote:
> On Fri, Feb 10, Jan Beulich wrote:
> 
>> >>> On 09.02.12 at 19:02, Olaf Hering <olaf@xxxxxxxxx> wrote:
>> > On Mon, Jan 30, Jan Beulich wrote:
>> > 
>> >> Below/attached a draft patch (compile tested only), handling save/
>> >> restore of the bank count, but not allowing for a config setting to
>> >> specify its initial value (yet).
>> > 
>> > Does it take more than just applying this patch for src+dst host and
>> > migrate a hvm guest? I see no difference, the mce warning is still
>> > there.
>> 
>> No, it shouldn't require anything else. Could you add a printk() each
>> to vmce_{save,load}_vcpu_ctxt() printing what gets saved/restored
>> (and at once checking that they actually get executed)? I was under
>> the impression that adding save records for HVM is a simple drop-in
>> exercise these days...
> 
> These functions are called for dom0, but not for domU. As a result,
> arch.nr_vmce_banks remains zero. I assume the guest needs to be
> initialized in some way as well, and that does not happen?

These functions should be called with Dom0 being the current domain,
but the struct domain * argument should certainly be that of the
DomU being saved/restored.

Guest initialization happens in vmce_init_vcpu(), called from
vcpu_initialise() (irrespective of the kind of domain, i.e. equally for
PV and HVM).

I spotted another problem with the patch though: MCG_CAP reads
don't reflect the possibly non-host bank count. I'm in the process
of addressing this, but the whole MCG_* handling is bogus, being
per-domain instead of per-vCPU (and at least MCG_CAP lacking
save/restore too).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

