
[Xen-devel] Clarifying PVH mode requirements



I run a Xen 4.6 Dom0:

        rpm -qa | egrep -i "kernel-default-4|xen-4"
                kernel-default-devel-4.4.0-8.1.g9f68b90.x86_64
                xen-4.6.0_08-405.1.x86_64

My guests are currently HVM in PVHVM mode; I'm exploring PVH.

IIUC, for 4.6, this doc

        http://xenbits.xen.org/docs/4.6-testing/misc/pvh-readme.txt

calls for the following changes:

@ GRUB cfg

-       GRUB_CMDLINE_XEN=" ..."
+       GRUB_CMDLINE_XEN=" dom0pvh ..."

&, @ guest.cfg

+       pvh = 1
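
Presumably the GRUB change also needs the config regenerated and a
reboot; my guess for this (grub2) box, in case I have it wrong:

        # regenerate grub.cfg so the new Xen command line takes effect
        grub2-mkconfig -o /boot/grub2/grub.cfg
        reboot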

For my guest.cfg, currently in PVHVM mode, I have

        builder = 'hvm'
        xen_platform_pci = 1
        device_model_version="qemu-xen"
        hap = 1
        ...

Q:
        Do any of these ^^ params also need to change with the addition of

        pvh = 1
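
For comparison, my (possibly wrong) reading of the readme is that PVH
wants a PV-style config rather than the HVM builder; a minimal sketch,
where the name, kernel/ramdisk, disk, and bridge are all placeholders:

        # hypothetical minimal PVH guest.cfg, per my reading of
        # pvh-readme.txt; names and paths below are placeholders
        name    = "guest"
        kernel  = "/boot/vmlinuz-guest"     # PV-style direct kernel boot
        ramdisk = "/boot/initrd-guest"
        memory  = 2048
        vcpus   = 2
        disk    = [ 'phy:/dev/vg0/guest,xvda,w' ]
        vif     = [ 'bridge=br0' ]
        pvh     = 1                         # boot this PV guest as PVH

Or do the HVM params above simply carry over?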


The doc also notes, "At the moment HAP is required for PVH."

As shown above, I have 'hap = 1' enabled.

But checking the CPU,

        hwinfo --cpu | egrep "Arch|Model"
          Arch: X86-64
          Model: 6.60.3 "Intel(R) Xeon(R) CPU E3-1220 v3 @ 3.10GHz"

neither 'hap' nor 'ept' is specifically called out, and checking the flags directly,

        egrep -wo 'vmx|lm|aes' /proc/cpuinfo | sort | uniq \
         | sed -e 's/aes/Hardware encryption=Yes (&)/g' \
               -e 's/lm/64 bit cpu=Yes (&)/g' \
               -e 's/vmx/Intel hardware virtualization=Yes (&)/g'

                Hardware encryption=Yes (aes)
                64 bit cpu=Yes (lm)

        egrep -wo 'hap|vmx|ept|vpid|npt|tpr_shadow|flexpriority|vnmi|lm|aes' \
         /proc/cpuinfo | sort | uniq

                aes
                lm

IIUC, Intel introduced EPT with the Nehalem arch, which precedes Haswell by ~5 years.

Q:
        Am I out of luck re: PVH on this (much newer) Haswell CPU, or is
        there a different check I should be running?
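
For instance, I wondered whether asking the hypervisor directly is more
reliable than Dom0's /proc/cpuinfo (which, I assume, may mask flags);
'virt_caps' is from `xl info`, but the grep patterns are my guesses:

        # hypervisor-level capability check -- corrections welcome
        xl info | grep -i virt_caps
        xl dmesg | egrep -i 'hap|ept|vmx'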

The doc further states, "At present the only PVH guest is an x86 64bit PV linux."

Is this still current/true info?
