[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [Xen-devel] Re: [RFC] RAS(Part II)--MCA enabling in XEN



Christoph Egger wrote:
On Wednesday 25 February 2009 03:25:12 Jiang, Yunhong wrote:

So, Frank/Egger, can I assume the following is the current consensus?

1) MCE is handled entirely by the Xen HV, while a guest's vMCE handler only
works on the guest's own behalf.
2) Xen presents a virtual #MC to the guest through MSR access emulation
(Xen will do the address translation if needed).
3) The guest's unmodified MCE handler will handle the injected vMCE.
4) Dom0 will get all logs/telemetry through a hypercall.
5) The action taken by Xen will be passed to Dom0 through the telemetry
mechanism.

Mostly. Regarding 2), I would first like to discuss how to handle errors
impacting multiple contiguous physical pages which are non-contiguous
in guest physical space.

I would also like to discuss how to do recovery actions that require
PCI access. One example of this is
Shanghai's "L3 Cache Index Disable" feature.
Xen delegates PCI config space to Dom0 and,
via PCI passthrough, partly to DomU.
That means that if registers in PCI config space are independently
accessible by Xen, Dom0, and/or DomU, they can interfere with each other.
Therefore, we need to
a) clearly define who handles what,
b) define some rules based on a), and
c) discuss how to handle Dom0/DomU going wild
    and breaking the rules defined in b).

I also agree on the approach in principle, but would like to see these points addressed. For non-contiguous pages, I suppose Xen could deliver multiple #vMCEs to the guest, split into contiguous parts. The vmce code seems to be set up to be able to do this.

As for the Shanghai feature: Christoph, are there any documents available on that feature? What kind of errors are delivered (corrected/correctable)?

- Frank

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

