
[Xen-devel] [Notes for xen summit 2018 design session] Reworking x86 Xen, current status and future plan

Thank you to Florian from NEC for writing these up. They are incomplete, 
because we only realized after some time that we didn't have anybody 
taking notes. If anyone remembers the earlier parts, please augment them.

## PV shim

The PV shim work got accelerated because of Meltdown; for a short time 
it was considered the best possible fix for Meltdown.

Why would anyone use PV? Unikernel support. Mirage had an interesting 
extension: a special hypercall issued after boot that would disallow a 
lot of hypercalls subsequently (RESTRICT support). But that doesn't 
really help against a malicious VM that never issues that call.

Will PV go away then? Andrew: definitely not, because XenServer needs 
it. Plus, PV has *significant* (orders of magnitude) performance 
advantages in some situations (hypercall overhead).
Disabling PV support is mostly about reducing the attack surface of the 
L0 hypervisor, for example to certify a smaller codebase.

Is there memory ballooning support in the PV shim? Andrew: ballooning is 
always a bit hacky, but it works about as well as it does without the PV 
shim.

How is scheduling done? Currently there is just simple pinning inside 
the shim. It still uses the credit1 scheduler, though, because there is 
an apparent bug with the null scheduler (which could be used otherwise), 
where after some time it stops scheduling vCPUs. Dario and Andrew say 
they are aware of either one or two bugs, and they will have to look 
into how to fix them. One is related to CPU hotplug (which uses the same 
codepath as bringing vCPUs up in the PV shim; Dario says there was a 
hacky fix for that at some point, but it was deemed not upstreamable, 
and a proper solution is needed).
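For context, the scheduler the (shim) hypervisor uses is selectable on 
the Xen command line via the standard `sched=` boot option; something 
like the following is what would let the shim use the null scheduler 
once the bugs above are fixed (fragment is illustrative, not a tested 
shim configuration):

```
# Xen command-line fragment in the bootloader entry for the hypervisor:
#   use the null scheduler (static 1:1 vCPU-to-pCPU assignment)
#   instead of the default credit1 scheduler
sched=null
```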

There is a side discussion about whether Xen can be built without the 
credit1 scheduler.
Because of how Kconfig works, it seems impossible to remove the default 
scheduler with the way Kconfig is currently set up. It's not 100% clear 
whether this can be fixed, or whether this is one of the small annoying 
Kconfig limitations, in which case maybe the new Kconfig maintainer can 
help with that.
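The underlying Kconfig pattern looks roughly like the sketch below 
(symbol names are illustrative, not Xen's actual ones): the `default` of 
a `choice` must resolve to a visible, selectable symbol, so the 
scheduler named as the default cannot simply be compiled out without 
also changing which symbol the `default` points at.

```
# Hypothetical Kconfig sketch of a default-scheduler choice.
choice
	prompt "Default scheduler"
	default SCHED_CREDIT_DEFAULT

config SCHED_CREDIT_DEFAULT
	bool "credit1" if SCHED_CREDIT

config SCHED_NULL_DEFAULT
	bool "null" if SCHED_NULL
endchoice
```

If `SCHED_CREDIT` is disabled, `SCHED_CREDIT_DEFAULT` becomes 
unselectable while still being named as the choice's default, which is 
the kind of corner where Kconfig behaves awkwardly.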

## PV - HVM split
COMPAT_ interfaces: these are currently shared between PV and HVM. 
Should they be split? No, because it's literally the same interface for 
both. There is also no reasonable split that could be done between 32 
and 64-bit, because a 64-bit guest generally boots in 32-bit mode and 
will issue hypercalls before switching to 64-bit mode; and even if not, 
a 64-bit guest can always drop into 32-bit mode.

PV is mostly done; the trap handlers and emulation are done.
The HVM part is more complicated: there are lots of assumptions about 
HVM guests baked into the MM code.

Do we want to support compiling DM_OPs out, so that in a shim scenario 
L0 can only run PVH guests? Sure, we could do that, if somebody wants to 
do it.

By the way, for ARM: the virtualization is effectively PVH, but Xen 
thinks it's PV. Stefano: this seems to have historic reasons, because 
there originally was an HVM implementation, and PVH ended up being 
considered PV by Xen. All of this should be fixed, or we might run into 
problems once the PV-HVM split is fully done.

## /proc/xen and /sys/hypervisor
Side discussion after the end: /proc/xen is going to go away eventually.
/sys/hypervisor isn't functionally complete and needs more information.
It should be behind a kernel Kconfig option, though, so that kernels 
running exclusively as guests can disable this.
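The notes don't spell out what /sys/hypervisor currently exposes; as a 
hedged sketch, a guest-side tool could enumerate whatever is there 
roughly like this (the helper name and the parameterized root are 
illustrative, not an existing API):

```python
from pathlib import Path

def hypervisor_info(root="/sys/hypervisor"):
    """Collect whatever the kernel exposes under /sys/hypervisor.

    Returns a dict mapping relative attribute path -> value, or {}
    when the interface is absent (e.g. the kernel was built without
    it, or we are not running as a guest). `root` is parameterized
    only so the helper can be exercised against a fake tree.
    """
    base = Path(root)
    if not base.is_dir():
        return {}
    info = {}
    for entry in sorted(base.rglob("*")):
        if entry.is_file():
            try:
                info[str(entry.relative_to(base))] = entry.read_text().strip()
            except OSError:
                # Some attributes may be present but unreadable.
                pass
    return info
```

On a Xen guest with the interface enabled, `hypervisor_info()["type"]` 
would typically report `xen`; on anything else the helper just returns 
an empty dict.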
