
Re: Thoughts on current Xen EDAC/MCE situation



On Wed, Jan 24, 2024 at 08:23:15AM +0100, Jan Beulich wrote:
> On 23.01.2024 23:52, Elliott Mitchell wrote:
> > On Tue, Jan 23, 2024 at 11:44:03AM +0100, Jan Beulich wrote:
> >> On 22.01.2024 21:53, Elliott Mitchell wrote:
> >>
> >>> I find the present handling of MCE in Xen an odd choice.  Having Xen do
> >>> most of the handling of MCE events is a behavior matching a traditional
> >>> stand-alone hypervisor.  Yet Xen was originally pushing any task not
> >>> requiring hypervisor action onto Domain 0.
> >>
> >> Not exactly. Xen in particular deals with all of the CPUs and all of
> >> the memory. Dom0 may be unaware of the full set of CPUs in the system,
> >> or of the full memory map (without resorting to interfaces which
> >> specifically make that information available, but which are not meant
> >> for the Dom0 kernel's own operation as a kernel).
> > 
> > Why would this be an issue?
> 
> Well, counter question: For all of ...
> 
> > I would expect the handling to be roughly:  NMI -> Xen; Xen schedules a
> > Dom0 vCPU which is eligible to run on the pCPU onto the pCPU; Dom0
> > examines registers/MSRs; Dom0 then issues a hypercall to Xen telling
> > Xen how to resolve the issue (no action, fix memory contents, kill page).
> > 
> > Ideally there would be an idle Dom0 vCPU, but interrupting a busy vCPU
> > would be viable.  It would even be reasonable to ignore affinity and
> > grab any Dom0 vCPU.
> > 
> > Dom0 has two purposes for the address.  First, to pass it back to Xen.
> > Second, to report it to a system administrator so they could restart the
> > system with that address marked as bad.  Dom0 wouldn't care whether the
> > address was directly accessible to it or not.
> > 
> > The proposed hypercall should report back what was affected by a UE
> > event.  A given site might have a policy that if $some_domain is hit by
> > a UE, everything is restarted.  Meanwhile, Dom0 or Xen itself being the
> > one hit could deserve urgent action.
> 
> ... this, did you first look at the code and figure out how what you
> suggest could be seamlessly integrated? Part of your suggestion (if I got
> it right) is, after all, to make maintenance on the Dom0 kernel side easy.
> I expect such adjustments being not overly intrusive would also be an
> acceptance criterion for the maintainers.

Maintenance on the Dom0 kernel isn't the issue.

One issue is making MCE reporting when running on Xen consistent with
MCE reporting when not running on Xen: notably a similar level of
information, and ideally having the tools which assist with analyzing
failures also work when running on Xen.

Another issue is doing a better job of keeping Xen's MCE handling up to
date as new hardware with new MCE implementations shows up.

> Second - since you specifically talk about UE: The more code is involved
> in handling, the higher the chance of the #MC ending up fatal to the
> system.

Indeed.  Yet right now I'm more concerned about whether MCE reporting is
happening at all; there are very few messages.

> Third, as to Dom0's purpose in having the address: if all it is to use
> it for is to pass it back to Xen, the paths in the respective drivers
> will necessarily be entirely different for the Xen vs. native cases.

I'm less than certain of the best place for the Xen-specific code to
intercept MCE events on the Linux side.  For UE memory events, the
simplest approach might be to wrap the memory_failure() function.  Yet
for Linux/x86, mce_register_decode_chain() also looks like a very good
candidate.
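To make the latter concrete, here is a minimal sketch of what a
decode-chain hook could look like.  mce_register_decode_chain(),
struct mce, MCI_STATUS_ADDRV and MCE_PRIO_UC are existing Linux/x86
interfaces; xen_mce_report() is a hypothetical stand-in for the
proposed hypercall, which does not exist today:

    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <linux/notifier.h>
    #include <asm/mce.h>

    /* Hypothetical stand-in for the proposed hypercall; no such Xen
     * interface exists yet. */
    static void xen_mce_report(u64 paddr, u64 status)
    {
            /* A real implementation would tell Xen how to resolve the
             * event: no action, fix memory contents, or kill the page. */
            pr_info("xen-mce: addr %#llx status %#llx\n", paddr, status);
    }

    static int xen_mce_decode(struct notifier_block *nb,
                              unsigned long val, void *data)
    {
            struct mce *m = data;

            /* Only act on records carrying a valid physical address. */
            if (!(m->status & MCI_STATUS_ADDRV))
                    return NOTIFY_DONE;

            xen_mce_report(m->addr, m->status);
            return NOTIFY_OK;
    }

    static struct notifier_block xen_mce_nb = {
            .notifier_call = xen_mce_decode,
            .priority = MCE_PRIO_UC, /* with other uncorrected-error users */
    };

    static int __init xen_mce_init(void)
    {
            mce_register_decode_chain(&xen_mce_nb);
            return 0;
    }
    device_initcall(xen_mce_init);

Wrapping memory_failure() would instead catch only the page-poisoning
path; the decode chain sees every logged event, which better matches the
reporting-consistency goal above.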


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@xxxxxxx  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445