
[Xen-devel] Unifying x86_64 / Xen init paths and reading hardware_subarch early



I will be respinning the generic Linux linker table solution [0] soon
based on hpa's feedback, now that I'm back from vacation. As I do that
I wanted to highlight a feature I'm adding to the linker table
solution which I'm not sure many have paid close attention to, but
which I think is important to Xen. I'm making use of the
zero page hardware_subarch to let us detect whether we're running
under a specific hypervisor *as early as possible*. This has a few
implications. Short term, it provides a proactive technical solution
to bugs such as the cr4 shadow crash (see commit
5054daa285beaf706f051fbd395dc36c9f0f907f) and ensures that *new* x86
features either get a proper Xen implementation proactively *or* at
the very least get properly annotated as unsupported, instead of
crashing and the breakage being discovered later. A good example here
is KASAN, which to this day lacks proper Xen support. In the future,
if the generic linker table solution gets merged, it would mean
developers would have to *think* about whether they support Xen at
development time. It does this in a way that is non-disruptive to Xen
/ x86_64 but, most *importantly*, it does not extend pvops! This
should avoid problems stemming from limited developer / maintainer
bandwidth, where a new feature is pushed into Linux for x86_64 but
the respective Xen solution is not addressed and the gap is not
caught early in patch review, as happened with KASAN.

[0] 
https://lkml.kernel.org/r/1450217797-19295-1-git-send-email-mcgrof@xxxxxxxxxxxxxxxx

Two things I'd like to request a bit of help with and review / consideration:

1) I'd like some advice on a curious problem I've stumbled on. I'd
like to access hardware_subarch super early, and in my review with at
least two x86 folks this *should* work:

diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index c913b7eb5056..9168842821c8 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -141,6 +141,7 @@ static void __init copy_bootdata(char *real_mode_data)

 asmlinkage __visible void __init x86_64_start_kernel(char * real_mode_data)
 {
+ struct boot_params *params = (struct boot_params *)__va(real_mode_data);
  int i;

  /*
@@ -157,6 +158,8 @@ asmlinkage __visible void __init x86_64_start_kernel(char * real_mode_data)
  (__START_KERNEL & PGDIR_MASK)));
  BUILD_BUG_ON(__fix_to_virt(__end_of_fixed_addresses) <= MODULES_END);

+ boot_params.hdr.hardware_subarch = params->hdr.hardware_subarch;
+
  cr4_init_shadow();

  /* Kill off the identity-map trampoline */

In practice today, though, this crashes the kernel. One does not need
to run Xen to test this: simply applying this change should crash a
bare metal / qemu instance. If you'd like to force a different value
for the subarch you can use this *debug patch* with kvm, which sets
the subarch to a value not yet assigned in Linux:

http://drvbp1.linux-foundation.org/~mcgrof/patches/2016/01/15/qemu-add-subarch.patch

Simply getting past the crash is my goal for now. The earliest I can
read the subarch as it stands is right after load_idt() on the x86
init path, and I simply have no clue why! I'm told this in theory
should work, but clearly it does not. I tried running qemu with gdb
and I can't get anything sensible out of it, so I need a bit more x86
help.

Why do I want this? It would mean we can cover a proactive solution
all the way up to the earliest calls in Linux. Without this, the
subarch becomes useful only after load_idt(). Since I'm using the
subarch to build dependency maps early on, it also means that the
linker table solution becomes useful only at
x86_64_start_reservations() and not x86_64_start_kernel(), which is
the first C entry point for 64-bit Linux. Having the subarch readable
as early as x86_64_start_kernel() means the linker table solution can
be used to proactively prevent issues even with discrepancies between
x86_64_start_kernel(), x86_64_start_reservations() and
xen_start_kernel(). There's another important reason listed below...

2) Provided we address 1) above, it could become possible to unify
*at least* the C Xen x86_64 init path and the bare metal x86_64 init
path without much code shuffling. Based on discussions at the last
Xen developer summit this seemed to be under consideration and
perhaps desirable. The patch below would need a bit more work, but
ultimately it gives a small glimpse of what this could in theory look
like:

http://drvbp1.linux-foundation.org/~mcgrof/patches/2015/12/15/x86-merge-x86-init-v1.patch

The Xen init stuff just becomes a Xen-specific subarch call. Folks
interested in this prospect are welcome to help review or expand on
this work. If you are working on another type of init unification
I'd like to hear about it as well.

  Luis

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

