[Xen-devel] [PATCH v2 0/4] Fix tools/xen-mceinj to handle >=4GB guest memory
The existing xen-mceinj cannot inject an MCE through MSR_MCI_ADDR into a
domain with more than 4GB of memory. For example, if domain 0 has more
than 4GB of memory, the command
xen-mceinj -d 0 -t 0 -p 0x2721a900
fails with the message "Failed to get pfn list ffffffff: Operation not
supported".
The cause is that the XEN_DOMCTL_getmemlist hypercall, which xen-mceinj
uses to translate the guest physical address (the argument of '-p') into
a host machine address, always fails for domains with more than 4GB of
memory due to the XSA-74 mitigation.
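For context, the old user-space translation path looked roughly like the
sketch below (the helper name and error handling are illustrative, not the
literal xen-mceinj.c code): the tool fetched the domain's whole PFN list via
libxc's xc_get_pfn_list(), which wraps XEN_DOMCTL_getmemlist, and indexed it
with the guest PFN to build a machine address:

/* Illustrative sketch of the old user-space GPA -> machine address
 * translation; gpa_to_maddr() is a made-up name. */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <xenctrl.h>

static uint64_t gpa_to_maddr(xc_interface *xch, uint32_t domid, uint64_t gpa)
{
    uint64_t gpfn = gpa >> XC_PAGE_SHIFT;
    xc_dominfo_t info;
    uint64_t *pfn_buf;
    uint64_t maddr = 0;

    if ( xc_domain_getinfo(xch, domid, 1, &info) != 1 )
        return 0;

    pfn_buf = calloc(info.nr_pages, sizeof(*pfn_buf));
    if ( !pfn_buf )
        return 0;

    /* Wraps XEN_DOMCTL_getmemlist; this is the call that fails with
     * "Operation not supported" for domains with >= 4GB of memory. */
    if ( xc_get_pfn_list(xch, domid, pfn_buf, info.nr_pages) < 0 )
        fprintf(stderr, "Failed to get pfn list %lx\n", info.nr_pages);
    else if ( gpfn < info.nr_pages )
        maddr = (pfn_buf[gpfn] << XC_PAGE_SHIFT) | (gpa & (XC_PAGE_SIZE - 1));

    free(pfn_buf);
    return maddr;
}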
This patchset fixes the problem by moving the translation into the
hypervisor, so that xen-mceinj no longer needs XEN_DOMCTL_getmemlist.
The first two patches just fix several code-style issues, while the
other two are the actual fix.
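The core of the fix (patch 3) is, in rough outline, a translation done in
the hypervisor's MCE injection path. The sketch below is only an
illustration under assumed names (translate_gpa_for_mci_addr() is made up;
the real change lives in the XEN_MC_msrinject handling in
xen/arch/x86/cpu/mcheck/mce.c): when the passed-in MCi_ADDR value is a guest
physical address, resolve it with get_gfn()/put_gfn() and rewrite it as a
machine address:

/* Illustrative sketch only, not the literal patch. */
#include <xen/sched.h>
#include <xen/errno.h>
#include <asm/p2m.h>

static int translate_gpa_for_mci_addr(struct domain *d, uint64_t *val)
{
    unsigned long gfn = *val >> PAGE_SHIFT;
    p2m_type_t t;
    mfn_t mfn = get_gfn(d, gfn, &t);

    if ( !mfn_valid(mfn) || !p2m_is_ram(t) )
    {
        put_gfn(d, gfn);
        return -EINVAL;
    }

    /* Keep the page offset; replace the guest frame with the machine frame. */
    *val = pfn_to_paddr(mfn_x(mfn)) | (*val & ~PAGE_MASK);

    put_gfn(d, gfn);  /* balance get_gfn(); cf. changelog item 4 below */

    return 0;
}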
Changes from v1 to v2:
1. The correct trailing whitespace is kept in patches 1 and 2.
2. Follow Xen code style in this version.
3. Fix several type and macro issues in patch 3.
4. Add a missing "put_gfn()" in patch 3.
5. Update the error messages in patch 3 to include more information.
6. Use a parameterized domain ID rather than a hardcoded 0 in several
functions in xen-mceinj.c.
7. Update the commit message of patch 4 to explicitly state that the
address translation is moved to the hypervisor.
Haozhong Zhang (4):
x86/mce: Fix code style
tools/xen-mceinj: Fix code style
x86/mce: Translate passed-in GPA to host machine address
tools/xen-mceinj: Pass in GPA when injecting through MSR_MCI_ADDR
tools/tests/mce-test/tools/xen-mceinj.c | 192 ++++++++------------------------
xen/arch/x86/cpu/mcheck/mce.c | 66 ++++++++---
xen/include/public/arch-x86/xen-mca.h | 33 +++---
3 files changed, 120 insertions(+), 171 deletions(-)
--
2.4.8
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel