
[Xen-devel] RE: [RFC] [PATCH 0/2] Some clean-up to MCA handling



Keir, can you please check in this patch?

This patch was already acked by Egger ("This patch is good" at 
http://lists.xensource.com/archives/html/xen-devel/2010-04/msg00990.html). 
Originally I wanted to re-submit it together with the other patch after 
addressing the comments, but since I have decided to split the other patch 
to make it easier to review, can you please check in this one first?

Thanks
--jyh


-----Original Message-----
From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx 
[mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Jiang, Yunhong
Sent: Tuesday, April 20, 2010 4:06 PM
To: Christoph Egger; Frank Van Der Linden
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx; Keir Fraser
Subject: [Xen-devel] RE: [RFC] [PATCH 0/2] Some clean-up to MCA handling


>> It is not "easier" internal handling. In fact, it makes no difference to
>> internal handling at all. The reason is: 1) In amd_f10.c, it will only
>> occupy 4 mc_msr entries,
>
>well, 4 in the generic handler plus 3 MSRs via mcinfo_extended which have
>been introduced in family10h.

So that means at least 16 - 3 = 13 mc_msr entries are wasted for each mcinfo_extended :-)
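
To make the waste concrete, here is a compilable sketch. It is simplified and uses hypothetical names, not the verbatim Xen definitions, and it assumes the MSR array is declared in terms of sizeof(void *), which would explain both the 16/32 entry counts and the (void *) remark quoted below:

/* Sketch only -- hypothetical names, not the real Xen headers.  If the
 * MSR array is sized with sizeof(void *), a 64-bit hypervisor computes
 * 32 slots while a 32-bit dom0 would compute 16. */
#include <stdint.h>
#include <stdio.h>

struct mcinfo_msr_sketch {
    uint64_t reg;                          /* MSR address */
    uint64_t value;                        /* MSR contents */
};

struct mcinfo_extended_sketch {
    uint32_t mc_msrs;                      /* number of valid entries */
    struct mcinfo_msr_sketch mc_msr[sizeof(void *) * 4];
};

int main(void)
{
    /* family10h stores only 3 extra MSRs here, so even the 16-slot
     * (32-bit) layout wastes 16 - 3 = 13 entries per record. */
    printf("slots: %zu, record size: %zu bytes\n",
           sizeof(void *) * 4, sizeof(struct mcinfo_extended_sketch));
    return 0;
}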

>
>> while on the Intel platform it may occupy 32 mc_msr entries, which is
>> sure to cost extra space. The mc_info buffer is quite limited and can't be
>> expanded at run time, so reducing the size is quite important.
>> 2) sizeof(void *) differs between a 64-bit hypervisor and a 32-bit dom0. I'm
>> not sure whether that has been tested in compatibility mode, which might be
>> confusing.
>>
>> In fact, since we already have mc_msrs included in mcinfo_extended, the
>> caller can get the size of the buffer quite easily.
>>
>> Of course, if you *really* don't care about the wasted space on the AMD
>> platform, it's ok for me. After all, on the Intel platform either there is
>> no extended information or it occupies all of the entries, so it really does
>> not matter to me. But the (void *) issue should be resolved, I suspect.
>
>Is it possible to change the internal infrastructure to deal with multiple
>mc_info's? The user (Dom0) will keep seeing just one, because the switch
>happens underneath.

I think the size limitation exists on two sides. Firstly, there are only 10 
mc_info entries reserved in the urgent queue and 20 in the non-urgent queue, 
and those queue sizes are not adjusted dynamically; secondly, each mc_info 
contains only 768 uint64_t's.
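
Back-of-the-envelope, using exactly those numbers (the macro names below are mine, not actual Xen identifiers):

/* Rough arithmetic for the fixed buffers described above; names are
 * illustrative only. */
#include <stdint.h>
#include <stdio.h>

#define MC_INFO_WORDS    768U   /* uint64_t's per mc_info */
#define URGENT_SLOTS      10U   /* mc_info entries in the urgent queue */
#define NONURGENT_SLOTS   20U   /* mc_info entries in the non-urgent queue */

int main(void)
{
    size_t per_entry = MC_INFO_WORDS * sizeof(uint64_t);        /* 6144 B */
    printf("per mc_info:      %zu bytes\n", per_entry);
    printf("urgent queue:     %zu bytes\n", URGENT_SLOTS * per_entry);
    printf("non-urgent queue: %zu bytes\n", NONURGENT_SLOTS * per_entry);
    return 0;
}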

>
>What does not fit into one mc_info will be put into the next.
>
>In xen you will need some operations:
>lowlevel: allocate, free, switch, read, write
>highlevel: get and put
>
>The mce code itself should just use the highlevel operations and also just
>see one mc_info. The highlevel operations see as many mc_info as needed and
>use the lowlevel operations which work on mc_info directly.
>
>Does that make sense to you?

I suspect this will add a lot more complexity. Also, won't this cause ABI 
trouble, since mc_info is defined in the public interface?
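
For concreteness, here is a minimal sketch of how I read the proposed two-level scheme (all names are hypothetical; none of this exists in Xen):

/* Hypothetical sketch of the proposed split -- all names invented.
 * Low-level ops manage a chain of fixed-size buffers; the high-level
 * get/put pair hides the chaining, so the MCE code keeps seeing one
 * logical mc_info. */
#include <stdlib.h>

struct mc_buf {
    struct mc_buf *next;        /* low-level: link to the next buffer */
    size_t used, size;
    unsigned char data[];       /* payload (one fixed-size chunk) */
};

/* low-level: allocate one more link in the chain */
static struct mc_buf *mc_buf_alloc(size_t size)
{
    struct mc_buf *b = calloc(1, sizeof(*b) + size);

    if (b)
        b->size = size;
    return b;
}

/* high-level: reserve len bytes, switching to a fresh buffer when the
 * current one is full; callers never see the chaining. */
static void *mc_get(struct mc_buf **head, size_t len)
{
    struct mc_buf *b = *head;

    if (!b || b->size - b->used < len) {
        struct mc_buf *n = mc_buf_alloc(len > 4096 ? len : 4096);

        if (!n)
            return NULL;        /* allocation failed: caller must cope */
        n->next = b;            /* low-level "switch" */
        *head = b = n;
    }
    b->used += len;
    return b->data + b->used - len;
}

int main(void)
{
    struct mc_buf *head = NULL;
    void *rec = mc_get(&head, 128);     /* first record forces an alloc */
    return rec ? 0 : 1;
}

Even stripped down like this, every record insert picks up allocation-failure paths and a second layer of bookkeeping, which is the extra complexity I worry about.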

As I have no run-time data on the MCE/CMCI trigger model, maybe we can 
postpone this change, unless someone raises the issue?
Attached is the new patch, which does not change the interface anymore.

--jyh

>
>> How about your opinion on the other patch?
>
>Still need to have a look at it.
>
>> Thanks
>> --jyh
>>
>> >Christoph

Attachment: mce_intel_gext.patch
Description: mce_intel_gext.patch


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

 

