
Re: [Xen-devel] [PATCH v2 5/7] x86/xen: nopv parameter support for HVM guest




On 2019/6/25 20:31, Juergen Gross wrote:
On 24.06.19 14:02, Zhenzhong Duan wrote:
A PVH guest needs PV extensions to work, so the nopv parameter is ignored
for PVH guests but not for HVM guests.

In order for the nopv parameter to take effect for HVM guests, we need to
distinguish between PVH and HVM guests early in the hypervisor detection
code. By moving the PVH detection into xen_platform_hvm(),
xen_pvh_domain() can be used for that purpose.

Signed-off-by: Zhenzhong Duan <zhenzhong.duan@xxxxxxxxxx>
Cc: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
Cc: Juergen Gross <jgross@xxxxxxxx>
Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: Ingo Molnar <mingo@xxxxxxxxxx>
Cc: Borislav Petkov <bp@xxxxxxxxx>
Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx
---
  arch/x86/xen/enlighten_hvm.c | 18 ++++++++++++------
  1 file changed, 12 insertions(+), 6 deletions(-)

diff --git a/arch/x86/xen/enlighten_hvm.c b/arch/x86/xen/enlighten_hvm.c
index 7fcb4ea..26939e7 100644
--- a/arch/x86/xen/enlighten_hvm.c
+++ b/arch/x86/xen/enlighten_hvm.c
@@ -25,6 +25,7 @@
  #include "mmu.h"
  #include "smp.h"
  +extern bool nopv;
  static unsigned long shared_info_pfn;
    void xen_hvm_init_shared_info(void)
@@ -226,20 +227,24 @@ static uint32_t __init xen_platform_hvm(void)
     if (xen_pv_domain())
         return 0;
 
+#ifdef CONFIG_XEN_PVH
+    /* Test for PVH domain (PVH boot path taken overrides ACPI flags). */
+    if (!x86_platform.legacy.rtc && x86_platform.legacy.no_vga)
+        xen_pvh = true;

Sorry, this won't work, as ACPI tables are scanned only some time later.
Hmm, right. Thanks for pointing that out.

You can test for xen_pvh being true here (for the case where the guest
has been booted via the Xen-PVH boot entry) and handle that case, but
the case of a PVH guest started via the normal boot entry (e.g. via
grub2) with nopv specified is difficult. The only idea I have right now
would be to use another struct hypervisor_x86 for that case, which would
only be used for Xen HVM/PVH _and_ nopv specified. It should be a copy
of the bare-metal variant, but with a special guest_late_init member
issuing a big fat warning in case PVH is detected.

After that warning, I guess PVH will eventually hang? If so, wouldn't BUG() be better?
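
Just to make sure I understand the suggestion, below is a rough sketch of that second hypervisor_x86 entry. It is only a sketch: the name x86_hyper_xen_hvm_nopv, reusing X86_HYPER_XEN_HVM as its type, and how it would be ordered against the existing x86_hyper_xen_hvm entry are all made up for illustration, and it assumes the same enlighten_hvm.c context (including the extern bool nopv declaration).

/* Sketch only: bare-metal-like entry used when nopv is given on Xen HVM/PVH. */
static __init void xen_hvm_nopv_guest_late_init(void)
{
#ifdef CONFIG_XEN_PVH
        /* ACPI has been parsed by this point, so the legacy-feature test is valid. */
        if (!x86_platform.legacy.rtc && x86_platform.legacy.no_vga) {
                printk(KERN_CRIT "nopv parameter isn't supported in PVH guest\n");
                BUG();
        }
#endif
}

static uint32_t __init xen_platform_hvm_nopv(void)
{
        /* Only claim the platform when running on Xen and nopv was given. */
        if (xen_pv_domain() || !nopv)
                return 0;

        return xen_cpuid_base();
}

const __initconst struct hypervisor_x86 x86_hyper_xen_hvm_nopv = {
        .name                   = "Xen HVM (nopv)",
        .detect                 = xen_platform_hvm_nopv,
        .type                   = X86_HYPER_XEN_HVM,
        /* Everything else stays at the bare-metal defaults. */
        .init.guest_late_init   = xen_hvm_nopv_guest_late_init,
};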

Adding another hypervisor_x86 seems a bit redundant; I'm thinking of the change below.

I'll test it tomorrow, but I'd appreciate your opinion on whether it's feasible. Thanks.

--- a/arch/x86/xen/enlighten_hvm.c
+++ b/arch/x86/xen/enlighten_hvm.c
@@ -25,6 +25,7 @@
 #include "mmu.h"
 #include "smp.h"

+extern bool nopv;
 static unsigned long shared_info_pfn;

 void xen_hvm_init_shared_info(void)
@@ -221,11 +222,33 @@ bool __init xen_hvm_need_lapic(void)
        return true;
 }

+static __init void xen_hvm_nopv_guest_late_init(void)
+{
+#ifdef CONFIG_XEN_PVH
+       if (x86_platform.legacy.rtc || !x86_platform.legacy.no_vga)
+               return;
+
+       /* PVH detected. */
+       xen_pvh = true;
+
+       printk(KERN_CRIT "nopv parameter isn't supported in PVH guest\n");
+       BUG();
+#endif
+}
+
 static uint32_t __init xen_platform_hvm(void)
 {
        if (xen_pv_domain())
                return 0;

+       if (xen_pvh_domain() && nopv)
+       {
+       /* guest booting via the Xen-PVH boot entry goes here */
+               printk(KERN_INFO "nopv parameter is ignored in PVH guest\n");
+       }
+       else if (nopv)
+       {
+       /* guest booting via normal boot entry (like via grub2) goes here */
+               x86_init.hyper.guest_late_init = xen_hvm_nopv_guest_late_init;
+               return 0;
+       }
        return xen_cpuid_base();
 }

@@ -258,4 +281,5 @@ static __init void xen_hvm_guest_late_init(void)
        .init.init_mem_mapping  = xen_hvm_init_mem_mapping,
        .init.guest_late_init   = xen_hvm_guest_late_init,
        .runtime.pin_vcpu       = xen_pin_vcpu,
+       .ignore_nopv            = true,
 };
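
For reference, the .ignore_nopv flag relies on the semantics from the earlier patch in this series: when nopv is given on the command line, the generic detection loop skips entries that don't set the flag, so the Xen entry stays in the race and the decision is made in xen_platform_hvm() itself. Roughly like below (only a sketch of that check, not necessarily the exact code in arch/x86/kernel/cpu/hypervisor.c):

        for (p = hypervisors; p < hypervisors + ARRAY_SIZE(hypervisors); p++) {
                /* With nopv given, only entries flagging ignore_nopv are tried. */
                if (unlikely(nopv) && !(*p)->ignore_nopv)
                        continue;

                pri = (*p)->detect();
                if (pri > max_pri) {
                        max_pri = pri;
                        h = *p;
                }
        }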

