
[Xen-changelog] [xen-4.1-testing] hvm/viridian: Ditch the extra assertions/warnings for non-viridian guests.



# HG changeset patch
# User Paul Durrant <paul.durrant@xxxxxxxxxx>
# Date 1323168680 0
# Node ID 89f30356a24435f6d86531ae6dd097248249df42
# Parent  005a7f0a2043eadb35701a7c22dc41be89d327e3
hvm/viridian: Ditch the extra assertions/warnings for non-viridian guests.

Consensus is they are over-aggressive.

Signed-off-by: Keir Fraser <keir@xxxxxxx>
Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
xen-unstable changeset:   24230:96bbdc894224
xen-unstable date:        Fri Nov 25 15:48:03 2011 +0000

Fix save/restore for HVM domains with viridian=1

xc_domain_save/restore currently pay no attention to
HVM_PARAM_VIRIDIAN, which results in an HVM domain running a recent
version of Windows (post-Vista) locking up on a domain restore due to
EOIs (done via a viridian MSR write) being silently dropped.  This
patch adds an extra save entry for the viridian parameter and also
adds code in the viridian kernel module to catch attempted use of
viridian functionality when the HVM parameter has not been set (a
rough sketch of that hypervisor-side guard is given below, after the
patch separator).

Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
Committed-by: Keir Fraser <keir@xxxxxxx>
xen-unstable changeset:   24229:373bd877cac3
xen-unstable date:        Fri Nov 25 15:30:41 2011 +0000
---
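
Note: the hypervisor-side guard mentioned in the description above is not
part of the libxc diff below.  The following is a rough, illustrative
sketch only -- the function and helper names (wrmsr_viridian_regs,
is_viridian_domain) and the location (xen/arch/x86/hvm/viridian.c) are
assumptions about where such a check would live, not a quote of the
committed hunk.  It amounts to bailing out of the viridian MSR handler
when HVM_PARAM_VIRIDIAN was never set:

    /* Sketch only, not the committed change. */
    int wrmsr_viridian_regs(uint32_t idx, uint64_t val)
    {
        struct domain *d = current->domain;

        /* If the domain was not created (or restored) with
         * HVM_PARAM_VIRIDIAN set, decline to handle the MSR here and
         * let the generic wrmsr path deal with it.  Before the libxc
         * fix below, this is why a restored Windows guest's viridian
         * EOI writes were dropped. */
        if ( !is_viridian_domain(d) )
            return 0;

        /* ... existing per-MSR handling (APIC assist, EOI, etc.) ... */
        return 1;
    }

Whether anything is printed or asserted at that bail-out point is what
changeset 24230 above dials back.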


diff -r 005a7f0a2043 -r 89f30356a244 tools/libxc/xc_domain_restore.c
--- a/tools/libxc/xc_domain_restore.c   Tue Dec 06 10:48:57 2011 +0000
+++ b/tools/libxc/xc_domain_restore.c   Tue Dec 06 10:51:20 2011 +0000
@@ -670,6 +670,7 @@
     uint64_t vm86_tss;
     uint64_t console_pfn;
     uint64_t acpi_ioport_location;
+    uint64_t viridian;
 } pagebuf_t;
 
 static int pagebuf_init(pagebuf_t* buf)
@@ -804,6 +805,16 @@
         }
         return pagebuf_get_one(xch, ctx, buf, fd, dom);
 
+    case XC_SAVE_ID_HVM_VIRIDIAN:
+        /* Skip padding 4 bytes then read the viridian flag. */
+        if ( RDEXACT(fd, &buf->viridian, sizeof(uint32_t)) ||
+             RDEXACT(fd, &buf->viridian, sizeof(uint64_t)) )
+        {
+            PERROR("error read the viridian flag");
+            return -1;
+        }
+        return pagebuf_get_one(xch, ctx, buf, fd, dom);
+
     default:
         if ( (count > MAX_BATCH_SIZE) || (count < 0) ) {
             ERROR("Max batch size exceeded (%d). Giving up.", count);
@@ -1353,6 +1364,9 @@
             fcntl(io_fd, F_SETFL, orig_io_fd_flags | O_NONBLOCK);
     }
 
+    if (pagebuf.viridian != 0)
+        xc_set_hvm_param(xch, dom, HVM_PARAM_VIRIDIAN, 1);
+
     if (pagebuf.acpi_ioport_location == 1) {
         DBGPRINTF("Use new firmware ioport from the checkpoint\n");
         xc_set_hvm_param(xch, dom, HVM_PARAM_ACPI_IOPORTS_LOCATION, 1);
diff -r 005a7f0a2043 -r 89f30356a244 tools/libxc/xc_domain_save.c
--- a/tools/libxc/xc_domain_save.c      Tue Dec 06 10:48:57 2011 +0000
+++ b/tools/libxc/xc_domain_save.c      Tue Dec 06 10:51:20 2011 +0000
@@ -1642,6 +1642,18 @@
             PERROR("Error when writing the firmware ioport version");
             goto out;
         }
+
+        chunk.id = XC_SAVE_ID_HVM_VIRIDIAN;
+        chunk.data = 0;
+        xc_get_hvm_param(xch, dom, HVM_PARAM_VIRIDIAN,
+                         (unsigned long *)&chunk.data);
+
+        if ( (chunk.data != 0) &&
+             wrexact(io_fd, &chunk, sizeof(chunk)) )
+        {
+            PERROR("Error when writing the viridian flag");
+            goto out;
+        }
     }
 
     if ( !callbacks->checkpoint )
diff -r 005a7f0a2043 -r 89f30356a244 tools/libxc/xg_save_restore.h
--- a/tools/libxc/xg_save_restore.h     Tue Dec 06 10:48:57 2011 +0000
+++ b/tools/libxc/xg_save_restore.h     Tue Dec 06 10:51:20 2011 +0000
@@ -134,6 +134,7 @@
 #define XC_SAVE_ID_HVM_CONSOLE_PFN    -8 /* (HVM-only) */
 #define XC_SAVE_ID_LAST_CHECKPOINT    -9 /* Commit to restoring after completion of current iteration. */
 #define XC_SAVE_ID_HVM_ACPI_IOPORTS_LOCATION -10
+#define XC_SAVE_ID_HVM_VIRIDIAN       -11
 
 /*
 ** We process save/restore/migrate in batches of pages; the below

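For reference, the XC_SAVE_ID_HVM_VIRIDIAN record that the save-side hunk
emits with wrexact(io_fd, &chunk, sizeof(chunk)) follows the same
three-field layout as the other negative-ID chunks: a 32-bit id, 4 bytes
of padding, then the 64-bit parameter value.  The struct below is only an
illustration of that layout (the names are not quoted from
xc_domain_save.c); it also shows why the restore side first reads and
discards 4 bytes before reading the 8-byte flag, the id having already
been consumed by the dispatch loop:

    #include <stdint.h>

    /* On-the-wire layout of the XC_SAVE_ID_HVM_VIRIDIAN record
     * (illustrative names only). */
    struct viridian_record {
        int32_t  id;    /* XC_SAVE_ID_HVM_VIRIDIAN, i.e. -11 */
        uint32_t pad;   /* alignment padding, skipped on restore */
        uint64_t data;  /* HVM_PARAM_VIRIDIAN value; non-zero = enabled */
    };

Note that the save side only writes the record when the parameter is
non-zero, so streams from plain (viridian=0) HVM guests are unchanged and
older restore code never sees the new id.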
_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-changelog


 

