[Xen-devel] [PATCH] x86/hvm: finish IOREQ correctly on completion path
Since the introduction of the linear_{read,write}() helpers in 3bdec530a5
(x86/HVM: split page straddling emulated accesses in more cases) the
completion path for IOREQs has been broken: if there is an IOREQ in
progress but hvm_copy_{to,from}_guest_linear() returns HVMTRANS_okay
(e.g. when the P2M type of the source/destination has been changed by the
IOREQ handler) execution will never re-enter hvmemul_do_io() where
IOREQs are completed. This usually results in a domain crash when the
next IOREQ enters hvmemul_do_io() and finds the remnants of the previous
IOREQ in the state machine.
This particular issue was discovered in relation to the p2m_ioreq_server
type, where an emulator changed the memory type between p2m_ioreq_server
and p2m_ram_rw in the process of responding to an IOREQ, which made
hvm_copy_..() behave differently on the way back. But the same could also
happen when, e.g., an emulator balloons memory to/from the guest in
response to an MMIO read/write.
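As an illustration (a hypothetical emulator-side sketch, not part of this
patch; handle_tracked_write() is an invented name), such a type flip could
be issued from the emulator's IOREQ handler via the libxendevicemodel call
xendevicemodel_set_mem_type():

    /*
     * Hypothetical sketch: an emulator's IOREQ handler that reverts a
     * tracked page to ordinary RAM while the vCPU's original access is
     * still recorded as an in-flight IOREQ.
     */
    #include <xendevicemodel.h>

    static void handle_tracked_write(xendevicemodel_handle *dmod,
                                     domid_t domid, uint64_t gfn)
    {
        /* ... perform whatever emulation the write was intercepted for ... */

        /*
         * Flip the page from HVMMEM_ioreq_server back to HVMMEM_ram_rw.
         * On the way back the guest access now succeeds directly via
         * hvm_copy_..(), so hvmemul_do_io() is never re-entered to
         * consume the pending IOREQ state.
         */
        xendevicemodel_set_mem_type(dmod, domid, HVMMEM_ram_rw, gfn, 1);
    }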
Fix it by checking whether IOREQ completion is required before trying to
finish the memory access immediately through hvm_copy_..(); re-enter
hvmemul_do_io() otherwise.
Signed-off-by: Igor Druzhinin <igor.druzhinin@xxxxxxxxxx>
---
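[Note to reviewers, not part of the commit message: for reference,
hvm_ioreq_needs_completion() is essentially the predicate below; this is
a sketch of the helper as I understand it from the current tree, so the
exact conditions may differ:]

    static inline bool hvm_ioreq_needs_completion(const ioreq_t *ioreq)
    {
        return ioreq->state == STATE_IOREQ_READY &&
               !ioreq->data_is_ptr &&
               (ioreq->type != IOREQ_TYPE_PIO || ioreq->dir != IOREQ_WRITE);
    }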
xen/arch/x86/hvm/emulate.c | 20 ++++++++++++++++++--
1 file changed, 18 insertions(+), 2 deletions(-)
diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 41aac28..36f8fee 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -1080,7 +1080,15 @@ static int linear_read(unsigned long addr, unsigned int bytes, void *p_data,
                        uint32_t pfec, struct hvm_emulate_ctxt *hvmemul_ctxt)
 {
     pagefault_info_t pfinfo;
-    int rc = hvm_copy_from_guest_linear(p_data, addr, bytes, pfec, &pfinfo);
+    const struct hvm_vcpu_io *vio = &current->arch.hvm.hvm_io;
+    int rc = HVMTRANS_bad_gfn_to_mfn;
+
+    /*
+     * If the memory access can be handled immediately - do it,
+     * otherwise re-enter ioreq completion path to properly consume it.
+     */
+    if ( !hvm_ioreq_needs_completion(&vio->io_req) )
+        rc = hvm_copy_from_guest_linear(p_data, addr, bytes, pfec, &pfinfo);
 
     switch ( rc )
     {
@@ -1123,7 +1131,15 @@ static int linear_write(unsigned long addr, unsigned int bytes, void *p_data,
                        uint32_t pfec, struct hvm_emulate_ctxt *hvmemul_ctxt)
 {
     pagefault_info_t pfinfo;
-    int rc = hvm_copy_to_guest_linear(addr, p_data, bytes, pfec, &pfinfo);
+    const struct hvm_vcpu_io *vio = &current->arch.hvm.hvm_io;
+    int rc = HVMTRANS_bad_gfn_to_mfn;
+
+    /*
+     * If the memory access can be handled immediately - do it,
+     * otherwise re-enter ioreq completion path to properly consume it.
+     */
+    if ( !hvm_ioreq_needs_completion(&vio->io_req) )
+        rc = hvm_copy_to_guest_linear(addr, p_data, bytes, pfec, &pfinfo);
 
     switch ( rc )
     {
--
2.7.4