
Re: [Xen-devel] MCE/EDAC Status/Updating?


  • To: Elliott Mitchell <ehem+xen@xxxxxxx>, Jan Beulich <JBeulich@xxxxxxxx>
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Fri, 15 Feb 2019 18:42:22 +0000
  • Cc: xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Fri, 15 Feb 2019 18:42:33 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Openpgp: preference=signencrypt

On 15/02/2019 18:20, Elliott Mitchell wrote:
> On Fri, Feb 15, 2019 at 03:58:49AM -0700, Jan Beulich wrote:
>>>>> On 15.02.19 at 05:23, <ehem+xen@xxxxxxx> wrote:
>>> The MCE/EDAC support code appears to be in rather poor shape.
>>>
>>> The AMD code mentions Family 10h, which was last available perhaps 10
>>> years ago.  Such parts have likely only been findable second-hand, and
>>> with difficulty, more recently; no hardware made in the past 5 years
>>> matches this profile.
>> Well, Fam10 is mentioned explicitly, but as per the use of e.g.
>> mcheck_amd_famXX newer ones are supported by this code
>> as well.
> In that case, sometime between Xen 4.1 and Xen 4.4 the AMD MCE/EDAC
> code was completely broken and hasn't been fixed since.

I take it you've got a use case which no longer works?

>>> Given the recent trends in Xen's development I'd tend to suggest going a
>>> different direction from the existing code.  The existing code was
>>> attempting to handle MCE/EDAC errors by emulating them and passing them
>>> to the affected domain.  Instead of this approach, let Domain 0 handle
>>> talking to MCE/EDAC hardware and merely have Xen decode addresses.
>>>
>>> If errors/warnings are occurring, you need those reports centralized,
>>> which points to handling them in Domain 0.  If an uncorrectable error
>>> occurs, Domain 0 should choose whether to kill a given VM or panic the
>>> entire machine.  Either way, Domain 0 really needs to be alerted that
>>> hardware is misbehaving and may need to be replaced.
>> But the point of virtualization is to allow guests to recover more or
>> less gracefully (at least as far as the theory goes), e.g. by killing
>> just a process, rather than being blindly killed.
>>
>> As to panicking the entire machine - if that's necessary, Dom0 is
>> unlikely to be in the right position.  There's far too high a chance of
>> further things going wrong before the event has even arrived in Dom0,
>> let alone before Dom0 has taken a decision.
> I'll agree it does make sense to try sending a corrupted-memory alert to
> the affected domain, rather than nuking the entire VM.  Alerting the
> owner of the hardware, though, should be the higher priority, as they
> will then know to schedule downtime and replace the module.
>
>
>>> The other point is that alerting Domain 0 is *far* more likely to get
>>> the correct type of attention.  A business owning a Domain U on a
>>> random machine may run a kernel without MCE/EDAC support, could miss
>>> the entries in their system log, and would not necessarily know the
>>> correct personnel to contact about failing hardware.
>> Alerting Dom0 alongside the affected DomU may indeed be desirable,
>> but mainly for the purpose of logging, only as a last resort for the
>> purpose of killing a guest.
> I think alerting Dom0 should be rather higher priority than alerting
> DomUs.  A given DomU may see one correctable memory error per month,
> which might seem harmless until you find there are a hundred DomUs on
> that hardware and every one of them is seeing one error per month.
>
> The only really useful place to report correctable errors like that is
> Dom0.  For uncorrectable errors, it is likely better to send a PV message
> to the DomU, and let QEMU turn it into something which looks like real
> hardware if needed.  Meanwhile Dom0 may have a more up-to-date driver for
> the hardware than Xen does.

I don't think anyone can defend the current state of MCE
handling/reporting in Xen, and I would certainly like to see it improved.

However, it's not as simple as "let dom0 handle everything".  Dom0 is
just a VM, like all other domains.  It can't access the MCE MSR banks, and
even if it could, it would have a pcpu vs vcpu problem when trying to
interpret the data.
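
To make the pcpu/vcpu point concrete, below is a rough sketch of the
kind of first-pass bank scan which only the hypervisor is in a position
to perform.  rdmsr() and record_error() here are placeholders, not
Xen's actual primitives:

#include <stdint.h>

#define MSR_IA32_MCG_CAP     0x179
#define MSR_IA32_MC0_STATUS  0x401
#define MCi_STATUS(i)        (MSR_IA32_MC0_STATUS + 4 * (i))
#define MCi_STATUS_VAL       (1ULL << 63)   /* bank holds a valid record */

extern uint64_t rdmsr(uint32_t msr);        /* ring-0 primitive: a dom0
                                             * rdmsr is intercepted */
extern void record_error(unsigned int pcpu, unsigned int bank,
                         uint64_t status);  /* hypothetical logger */

/* Scan every MCA bank on the pcpu which took the #MC.  The results are
 * inherently per-pcpu, which is why a dom0 vcpu (which may migrate
 * between pcpus) cannot meaningfully do this itself. */
static void scan_mca_banks(unsigned int pcpu)
{
    unsigned int i, nbanks = rdmsr(MSR_IA32_MCG_CAP) & 0xff;

    for ( i = 0; i < nbanks; i++ )
    {
        uint64_t status = rdmsr(MCi_STATUS(i));

        if ( status & MCi_STATUS_VAL )
            record_error(pcpu, i, status);
    }
}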

Xen is the entity which needs to handle the #MC, and do first-pass
processing.  If we want to give it to dom0 for further processing, it
either has to be virtualised in an architectural manner, or passed via a
paravirt channel so dom0 definitely knows it is dealing with data in
different enumeration spaces.
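
As a purely illustrative sketch of the paravirt option, the record
handed to dom0 could carry explicitly-physical coordinates alongside
any guest-relative ones, so there is no ambiguity about which
enumeration space each field lives in.  None of these names are Xen's
real interface:

#include <stdint.h>

/* Hypothetical PV channel record: everything physical is labelled as
 * such, so dom0 never mistakes it for its own vcpu-local view. */
struct pv_mce_record {
    uint32_t pcpu;      /* physical CPU which raised the #MC */
    uint32_t bank;      /* MCA bank index on that pcpu */
    uint64_t status;    /* raw MCi_STATUS, physical enumeration */
    uint64_t addr;      /* machine (host-physical) address, if valid */
    uint16_t domid;     /* owning domain, if Xen could attribute it */
    uint64_t gfn;       /* guest frame that the address maps to, if any */
};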

I expect there are also some non-trivial ACPI interactions here, which
are also made complicated by the Xen/dom0
interface-turned-undocumented-mess.

Another issue we should look into: if we are going to make improvements
here, how do we go about ensuring that we don't regress the behaviour
again?  I have no experience in this area (other than bugfixing the #MC
handler until it appears to behave as it did before), but surely there
are some ways of testing?
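
One cheap form of regression test would be to feed synthetic
MCi_STATUS values through the handler's classification logic and check
the verdicts.  A minimal, self-contained sketch of the idea (classify()
and the verdicts are invented for illustration; the real handler's
decision logic is far more involved):

#include <assert.h>
#include <stdint.h>

#define MCI_STATUS_VAL  (1ULL << 63)   /* valid record */
#define MCI_STATUS_UC   (1ULL << 61)   /* uncorrected error */
#define MCI_STATUS_PCC  (1ULL << 57)   /* processor context corrupt */

enum verdict { IGNORE, LOG_CORRECTED, KILL_DOMAIN, PANIC_HOST };

static enum verdict classify(uint64_t status)
{
    if ( !(status & MCI_STATUS_VAL) )
        return IGNORE;
    if ( status & MCI_STATUS_PCC )
        return PANIC_HOST;       /* context lost: cannot contain */
    if ( status & MCI_STATUS_UC )
        return KILL_DOMAIN;      /* uncorrected, but containable */
    return LOG_CORRECTED;        /* corrected: report to dom0 only */
}

int main(void)
{
    assert(classify(0) == IGNORE);
    assert(classify(MCI_STATUS_VAL) == LOG_CORRECTED);
    assert(classify(MCI_STATUS_VAL | MCI_STATUS_UC) == KILL_DOMAIN);
    assert(classify(MCI_STATUS_VAL | MCI_STATUS_UC | MCI_STATUS_PCC)
           == PANIC_HOST);
    return 0;
}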

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

