
Re: [PATCH 3/4] xen/version: Drop bogus return values for XENVER_platform_parameters


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>
  • Date: Thu, 5 Jan 2023 22:17:03 +0000
  • Cc: George Dunlap <George.Dunlap@xxxxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Julien Grall <julien@xxxxxxx>, Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Thu, 05 Jan 2023 22:17:40 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH 3/4] xen/version: Drop bogus return values for XENVER_platform_parameters

On 05/01/2023 7:57 am, Jan Beulich wrote:
> On 04.01.2023 20:55, Andrew Cooper wrote:
>> On 04/01/2023 4:40 pm, Jan Beulich wrote:
>>> On 03.01.2023 21:09, Andrew Cooper wrote:
>>>> A split in virtual address space is only applicable for x86 PV guests.
>>>> Furthermore, the information returned for x86 64bit PV guests is wrong.
>>>>
>>>> Explain the problem in version.h, stating the other information that PV
>>>> guests need to know.
>>>>
>>>> For 64bit PV guests, and all non-x86-PV guests, return 0, which is strictly
>>>> less wrong than the values currently returned.
>>> I disagree for the 64-bit part of this. Seeing Linux'es exposure of the
>>> value in sysfs I even wonder whether we can change this like you do for
>>> HVM. Who knows what is being inferred from the value, and by whom.
>> Linux's sysfs ABI isn't relevant to us here.  The sysfs ABI says it
>> reports what the hypervisor presents, not that it will be a nonzero number.
> It effectively reports the hypervisor's (virtual) base address there. How
> can we not care if something (kexec comes to mind) is using it for
> whatever purpose?

What about kexec do you think would care?

The only thing kexec-tools cares about is XENVER_capabilities, but even
that's not actually correct for figuring out whether Xen can do kexec
transitions to ELF64/32.

> And thinking of it, the toolstack has uses,
> too. Assuming you audited them, did you consider removing dead uses in
> a prereq patch (and discussing the effects on live ones in the description)?

There is only one toolstack use I can spot which is non-informational,
and it's broken AFAICT.

`xl dump-core` writes out a header which includes this metadata, but it
takes dom0's value, not domU's.  (Not that this is relevant AFAICT,
because the M2P is handled specially anyway.)


Most XENVER_* information is global (and by this, I mean invariant and
non-caller-dependent, outside of livepatching).

XENVER_guest_handle is caller-variant, but the toolstack has proper
interfaces to get/set this value.

XENVER_platform_parameters (and XENVER_get_features for that matter) are
caller-variant, and the toolstack has no way to get domU's view of this
data.


Every use of XENVER_platform_parameters I can find (well, this is the
only interesting one) is broken, even in the Xen code.
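
(For reference, a minimal sketch of how a guest retrieves this value,
assuming the usual guest-side HYPERVISOR_xen_version() hypercall wrapper;
the result is always the caller's own view:)

    struct xen_platform_parameters pp;
    unsigned long virt_start = 0;

    /* Fills in pp.virt_start for the *calling* domain, which is why the
     * toolstack cannot retrieve a domU's value this way. */
    if ( HYPERVISOR_xen_version(XENVER_platform_parameters, &pp) == 0 )
        virt_start = pp.virt_start;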

>>>> --- a/xen/include/public/version.h
>>>> +++ b/xen/include/public/version.h
>>>> @@ -42,6 +42,26 @@ typedef char xen_capabilities_info_t[1024];
>>>>  typedef char xen_changeset_info_t[64];
>>>>  #define XEN_CHANGESET_INFO_LEN (sizeof(xen_changeset_info_t))
>>>>  
>>>> +/*
>>>> + * This API is problematic.
>>>> + *
>>>> + * It is only applicable to guests which share pagetables with Xen (x86 PV
>>>> + * guests), and is supposed to identify the virtual address split between
>>>> + * guest kernel and Xen.
>>>> + *
>>>> + * For 32bit PV guests, it mostly does this, but the caller needs to know that
>>>> + * Xen lives between the split and 4G.
>>>> + *
>>>> + * For 64bit PV guests, Xen lives at the bottom of the upper canonical range.
>>>> + * This previously returned the start of the upper canonical range (which is
>>>> + * the userspace/Xen split), not the Xen/kernel split (which is 8TB further
>>>> + * on).  This now returns 0 because the old number wasn't correct, and
>>>> + * changing it to anything else would be even worse.
>>> Whether the guest runs user mode code in the low or high half (or in yet
>>> another way of splitting) isn't really dictated by the PV ABI, is it?
>> No, but given a choice of reporting the thing which is an architectural
>> boundary, or the one which is the actual split between the two adjacent
>> ranges, reporting the architectural boundary is clearly the unhelpful thing.
> Hmm. To properly parallel the 32-bit variant, a [start,end] range would need
> exposing for 64-bit, rather than exposing nothing.

The 32-bit version is a start/end pair, but with end being implicit at
the 4G architectural boundary.

If we were doing 64-bit from scratch, then reporting end would have been
sensible, because for 64-bit, start is the architectural boundary which
can be implicit.
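
To put numbers on that (constants from the public headers; illustrative
only, the exact header definitions differ in spelling):

/* 32bit PAE PV: Xen lives in [virt_start, 4G) -- the end is implicit. */
#define __HYPERVISOR_VIRT_START_PAE 0xF5800000       /* the variable split */

/* 64bit PV: Xen's reservation is the bottom 8TB of the upper half. */
#define HYPERVISOR_VIRT_START 0xFFFF800000000000  /* what was returned */
#define HYPERVISOR_VIRT_END   0xFFFF880000000000  /* the actual Xen/kernel split */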

But there is no such thing as a 64bit PV guest with any (useful) idea of
a variable split, because this number has been junk for the entire
lifetime of 64bit PV guests.  In particular, ...

>>>> + * For all guest types using hardware virt extensions, Xen is not mapped into
>>>> + * the guest kernel virtual address space.  This now returns 0, where it
>>>> + * previously returned unrelated data.
>>>> + */
>>>>  #define XENVER_platform_parameters 5
>>>>  struct xen_platform_parameters {
>>>>      xen_ulong_t virt_start;
>>> ... the field name tells me that all that is being conveyed is the virtual
>>> address of where the hypervisor area starts.
>> IMO, it doesn't matter what the name of the field is.  It dates from the
>> days when 32bit PV was the only type of guest.
>>
>> 32bit PV guests really do have a variable split, so the guest kernel
>> really does need to get this value from Xen.
>>
>> The split for 64bit PV guests is compile-time constant, hence why 64bit
>> PV kernels don't care.
> ... once we get to run Xen in 5-level mode, 4-level PV guests could also
> gain a variable split: Like for 32-bit guests now, only the r/o M2P would
> need to live in that area, and this may well occupy less than the full
> range presently reserved for the hypervisor.

... you can't do this, because it only works for guests which have
chosen to find the M2P using XENMEM_machphys_mapping (e.g. Linux), and
doesn't for e.g. MiniOS, which does:

#define machine_to_phys_mapping ((unsigned long *)HYPERVISOR_VIRT_START)

In fact, looking at this, MiniOS is also broken as a 32bit PV dom0,
because it hardcodes __MACH2PHYS_VIRT_START in the case where the split
really is variable.
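
For contrast, a sketch of the robust approach (roughly what Linux does),
assuming the standard HYPERVISOR_memory_op() wrapper:

    struct xen_machphys_mapping mapping;
    unsigned long *m2p;

    /* Ask Xen where the M2P actually lives instead of hardcoding it. */
    if ( HYPERVISOR_memory_op(XENMEM_machphys_mapping, &mapping) == 0 )
        m2p = (unsigned long *)mapping.v_start;
    else
        m2p = (unsigned long *)HYPERVISOR_VIRT_START; /* legacy fixed location */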


It's only PV guests which are LA57-aware that can possibly benefit from
a variable-position M2P, and only because that will be a new ELFNOTE
protocol.

>
>> For compat HVM, it happens to pick up the -1 from:
>>
>> #ifdef CONFIG_PV32
>>     HYPERVISOR_COMPAT_VIRT_START(d) =
>>         is_pv_domain(d) ? __HYPERVISOR_COMPAT_VIRT_START : ~0u;
>> #endif
>>
>> in arch_domain_create(), whereas for non-compat HVM, it gets a number in
>> an address space it has no connection to in the slightest.  ARM guests
end up getting XEN_VIRT_START (== 2M) handed back, but this is absolutely
an internal detail that guests have no business knowing.
> Well, okay, this looks to be good enough an argument to make the adjustment
> you propose for !PV guests.

Right, HVM (on all architectures) is very cut and dried.

But it feels wrong not to address the PV64 issue at the same time,
because it is a similar level of broken, despite there being (in theory)
a legitimate need for a PV guest kernel to know it.

~Andrew

 

