
[Xen-devel] [PATCH v2] x86/hvm: finish IOREQ correctly on completion path



Since the introduction of linear_{read,write}() helpers in 3bdec530a5
(x86/HVM: split page straddling emulated accesses in more cases) the
completion path for IOREQs has been broken: if there is an IOREQ in
progress but hvm_copy_{to,from}_guest_linear() returns HVMTRANS_okay
(e.g. when the P2M type of the source/destination has been changed by
the IOREQ handler), execution will never re-enter hvmemul_do_io(),
where IOREQs are completed. This usually results in a domain crash
when the next IOREQ enters hvmemul_do_io() and finds the remnants of
the previous IOREQ still in the state machine.
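
For illustration, the following is a tiny standalone C model of that
failure mode (not Xen code; all names below are made-up stand-ins): the
first pass records an in-flight request, the completion pass returns
success without re-entering the MMIO path, and the next request then
finds the stale state.

#include <stdio.h>

/* Stand-in for the per-vCPU I/O emulation state kept in struct hvm_vcpu_io. */
enum io_state { STATE_IOREQ_NONE, STATE_IOREQ_INPROCESS };
static enum io_state io_req_state = STATE_IOREQ_NONE;

/* First pass: the access is MMIO, so an IOREQ is sent to the emulator. */
static void first_pass(void)
{
    io_req_state = STATE_IOREQ_INPROCESS;
}

/*
 * Broken completion pass: the direct copy now "succeeds" (the emulator
 * changed the page's P2M type), so nothing re-enters hvmemul_do_io() and
 * io_req_state is never reset.
 */
static void broken_completion_pass(void)
{
}

int main(void)
{
    first_pass();
    broken_completion_pass();

    /* The next IOREQ finds the stale state; in Xen this crashes the domain. */
    if ( io_req_state != STATE_IOREQ_NONE )
        printf("stale IOREQ state left behind (%d)\n", (int)io_req_state);

    return 0;
}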

This particular issue was discovered in relation to the p2m_ioreq_server
type, where an emulator changed the memory type between p2m_ioreq_server
and p2m_ram_rw while responding to an IOREQ, which made hvm_copy_..()
behave differently on the way back. But the same problem also applies
to cases where e.g. an emulator balloons memory to/from the guest in
response to an MMIO read/write, etc.

Fix this by checking whether IOREQ completion is required before trying
to finish the memory access immediately through hvm_copy_..(), and
re-entering hvmemul_do_io() otherwise.
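
The following is a minimal, standalone sketch of the added check
(simplified, not the actual patch below; the types and helpers are
made-up stand-ins for vio->mmio_cache[], hvmemul_find_mmio_cache() and
hvm_copy_from_guest_linear()): if a cache entry already exists for the
linear address, the access was handled as MMIO on a previous pass, so
the direct copy is skipped and the MMIO/IOREQ completion path is forced.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Simplified return codes, mirroring HVMTRANS_okay / HVMTRANS_bad_gfn_to_mfn. */
enum rc { TRANS_OKAY, TRANS_BAD_GFN_TO_MFN };

/* Stand-in for one entry of vio->mmio_cache[]. */
struct mmio_cache { bool valid; unsigned long gla; };

/* Stand-in for hvmemul_find_mmio_cache(vio, addr, dir, false): lookup only. */
static struct mmio_cache *find_cache(struct mmio_cache *c, unsigned long gla)
{
    return (c->valid && c->gla == gla) ? c : NULL;
}

/* Stand-in for hvm_copy_from_guest_linear(): pretend the copy now succeeds. */
static enum rc copy_from_guest(void *dst, unsigned long gla, size_t len)
{
    (void)gla;
    memset(dst, 0, len);
    return TRANS_OKAY;
}

/* Simplified linear_read(): skip the direct copy if the access was MMIO before. */
static enum rc linear_read(struct mmio_cache *c, unsigned long gla,
                           void *buf, size_t len)
{
    if ( find_cache(c, gla) )
        return TRANS_BAD_GFN_TO_MFN;   /* force the MMIO/IOREQ completion path */

    return copy_from_guest(buf, gla, len);
}

int main(void)
{
    struct mmio_cache cache = { .valid = true, .gla = 0x1000 };
    char buf[8];

    /* Re-issued access: must complete via the MMIO path, not the direct copy. */
    printf("rc = %d (expect %d)\n",
           (int)linear_read(&cache, 0x1000, buf, sizeof(buf)),
           (int)TRANS_BAD_GFN_TO_MFN);
    return 0;
}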

Signed-off-by: Igor Druzhinin <igor.druzhinin@xxxxxxxxxx>
---
Changes in v2:
 * use test for MMIO cache entry to identify preceding MMIO access
 * change the comment message to be more descriptive as suggested
---
 xen/arch/x86/hvm/emulate.c | 35 +++++++++++++++++++++++++++++++----
 1 file changed, 31 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 754baf6..365fdc9 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -952,7 +952,7 @@ static int hvmemul_phys_mmio_access(
  * cache indexed by linear MMIO address.
  */
 static struct hvm_mmio_cache *hvmemul_find_mmio_cache(
-    struct hvm_vcpu_io *vio, unsigned long gla, uint8_t dir)
+    struct hvm_vcpu_io *vio, unsigned long gla, uint8_t dir, bool create)
 {
     unsigned int i;
     struct hvm_mmio_cache *cache;
@@ -966,6 +966,9 @@ static struct hvm_mmio_cache *hvmemul_find_mmio_cache(
             return cache;
     }
 
+    if ( !create )
+        return NULL;
+
     i = vio->mmio_cache_count;
     if( i == ARRAY_SIZE(vio->mmio_cache) )
         return NULL;
@@ -1000,7 +1003,7 @@ static int hvmemul_linear_mmio_access(
 {
     struct hvm_vcpu_io *vio = &current->arch.hvm.hvm_io;
     unsigned long offset = gla & ~PAGE_MASK;
-    struct hvm_mmio_cache *cache = hvmemul_find_mmio_cache(vio, gla, dir);
+    struct hvm_mmio_cache *cache = hvmemul_find_mmio_cache(vio, gla, dir, true);
     unsigned int chunk, buffer_offset = 0;
     paddr_t gpa;
     unsigned long one_rep = 1;
@@ -1089,7 +1092,19 @@ static int linear_read(unsigned long addr, unsigned int bytes, void *p_data,
                        uint32_t pfec, struct hvm_emulate_ctxt *hvmemul_ctxt)
 {
     pagefault_info_t pfinfo;
-    int rc = hvm_copy_from_guest_linear(p_data, addr, bytes, pfec, &pfinfo);
+    struct hvm_vcpu_io *vio = &current->arch.hvm.hvm_io;
+    struct hvm_mmio_cache *cache = hvmemul_find_mmio_cache(vio, addr,
+                                                           IOREQ_READ, false);
+    int rc = HVMTRANS_bad_gfn_to_mfn;
+
+    /*
+     * If there is an MMIO cache entry for that access then we must be re-issuing
+     * an access that was previously handled as MMIO. Thus it is imperative that
+     * we handle this access in the same way to guarantee completion and hence
+     * clean up any interim state.
+     */
+    if ( !cache )
+        rc = hvm_copy_from_guest_linear(p_data, addr, bytes, pfec, &pfinfo);
 
     switch ( rc )
     {
@@ -1132,7 +1147,19 @@ static int linear_write(unsigned long addr, unsigned int bytes, void *p_data,
                         uint32_t pfec, struct hvm_emulate_ctxt *hvmemul_ctxt)
 {
     pagefault_info_t pfinfo;
-    int rc = hvm_copy_to_guest_linear(addr, p_data, bytes, pfec, &pfinfo);
+    struct hvm_vcpu_io *vio = &current->arch.hvm.hvm_io;
+    struct hvm_mmio_cache *cache = hvmemul_find_mmio_cache(vio, addr,
+                                                           IOREQ_WRITE, false);
+    int rc = HVMTRANS_bad_gfn_to_mfn;
+
+    /*
+     * If there is an MMIO cache entry for that access then we must be re-issuing
+     * an access that was previously handled as MMIO. Thus it is imperative that
+     * we handle this access in the same way to guarantee completion and hence
+     * clean up any interim state.
+     */
+    if ( !cache )
+        rc = hvm_copy_to_guest_linear(addr, p_data, bytes, pfec, &pfinfo);
 
     switch ( rc )
     {
-- 
2.7.4

