
Re: [Xen-devel] [PATCH] x86: allow NMI injection

  • To: Jan Beulich <jbeulich@xxxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: Keir Fraser <keir@xxxxxxxxxxxxx>
  • Date: Wed, 28 Feb 2007 11:59:58 +0000
  • Delivery-date: Wed, 28 Feb 2007 03:59:23 -0800
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AcdbL/epNi3bbMcjEduA9AAX8io7RQ==
  • Thread-topic: [Xen-devel] [PATCH] x86: allow NMI injection

On 28/2/07 11:39, "Jan Beulich" <jbeulich@xxxxxxxxxx> wrote:

> NetWare's internal debugger needs the ability to send NMI IPIs, and
> there is no reason not to allow domUs, or dom0's vCPUs other than vCPU 0,
> to handle NMIs (they will simply never see hardware-generated ones).
> While it currently has no frontend, the added hypercall can also be
> used to inject NMIs into foreign VMs.
> Along the way, this fixes a potential race condition caused by
> previously accessing the VCPU flags field non-atomically in entry.S.
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxxxx>

Notably this patch changes the way that an NMI handler is registered, to use
the native-like vector 2. That changes the guest interface, though. Do you
really need to be able to specify a custom CS? Can you not vector to the
flat CS and then far jump?

I'm not sure about making the IPI function a physdev_op(), since this is
still a virtual NMI (it has nothing to do with real hardware NMIs). It might
be better to make it a vcpu_op. It would then not be a great fit for the
send-to-all and send-to-all-but-self overrides, but I'm not sure how important
that optimisation is (or how important it is to make the NMI deliveries as
simultaneous as possible).

 -- Keir

Xen-devel mailing list


