
Re: Detecting whether dom0 is in a VM



On 06 Jul 2023 09:02, Jan Beulich wrote:
> On 05.07.2023 18:20, zithro wrote:
>> So I'm wondering, isn't that path enough for correct detection?
>> I mean, if "/sys/class/dmi/id/sys_vendor" reports Xen (or KVM, or any
>> other known hypervisor), it's nested, otherwise it's on hardware?
>>
>> Is it really mandatory to use CPUID leaves?

> Let me ask the other way around: In user mode code under a non-nested
> vs nested Xen, what would you be able to derive from CPUID? The
> "hypervisor" bit is going to be set in both cases. (All assuming you
> run on new enough hardware+Xen such that CPUID would be intercepted
> even for PV.)

I'm a bit clueless about CPUID stuff, but if I understand correctly, you're essentially saying that CPUID may not be a reliable way to detect nesting? Also, I don't get why the cpuid command returns two different values depending on the -k switch:
# cpuid -l 0x40000000
hypervisor_id (0x40000000) = "\0\0\0\0\0\0\0\0\0\0\0\0"
# cpuid -k -l 0x40000000
hypervisor_id (0x40000000) = "XenVMMXenVMM"

> Yet relying on DMI is fragile, too: Along the lines of
> https://lists.xen.org/archives/html/xen-devel/2022-01/msg00604.html
> basically any value in there could be "inherited" from the host (i.e.
> from the layer below, to be precise).

So using "/sys/class/dmi/id/sys_vendor", or simply running "dmesg | grep DMI:", is also not reliable, as the values can be inherited/spoofed by the hypervisor underneath?

> The only way to be reasonably
> certain is to ask Xen about its view. The raw or host featuresets
> should give you this information, in the "mirror" of said respective
> CPUID leaf's "hypervisor" bit.

As said above, I'm clueless; can you expand, please?

> But of course that still won't tell
> you which _kind_ of hypervisor is the immediate next one underneath
> Xen.
>
> This then further raises the question of what use it is to know the
> kind of the next level hypervisor, when multiple may be stacked on
> top of one another ...

We need an answer from the systemd guys, but the committer expanded on the reasons why the change was made [1]: "the detect_vm_cpuid check was returning a VIRTUALIZATION_NONE result on non-nested dom0 (checking the log from back then I was getting No virtualization found in CPUID), but would report other CPUID-detectable hypervisors when dom0 was nested, so we still wanted to check it for this case".

Systemd uses this information to avoid starting some services when virtualized, like SMART/smartmontools (whose systemd unit files contain "ConditionVirtualization=no").
As the check now reports virtualization even on bare-metal dom0s, that condition fails and the service does not start.
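For illustration, this is how such a condition looks in a unit file (an excerpt in the shape of smartmontools' smartd.service; the exact unit contents vary by distribution):

```ini
[Unit]
Description=Self Monitoring and Reporting Technology (SMART) Daemon
# Skip this unit whenever systemd-detect-virt reports any virtualization.
# On a bare-metal dom0 this should pass, but it fails when detect_vm
# reports "xen" there.
ConditionVirtualization=no

[Service]
ExecStart=/usr/sbin/smartd -n
```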

[1] https://github.com/systemd/systemd/issues/28113#issuecomment-1621559642
