
Re: [PATCH 1/2][4.15?] x86/shadow: suppress "fast fault path" optimization when running virtualized


  • To: Tim Deegan <tim@xxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Mon, 8 Mar 2021 13:47:45 +0000
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, "George Dunlap" <george.dunlap@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Ian Jackson <iwj@xxxxxxxxxxxxxx>
  • Delivery-date: Mon, 08 Mar 2021 13:48:42 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 08/03/2021 09:25, Tim Deegan wrote:
> At 16:37 +0100 on 05 Mar (1614962224), Jan Beulich wrote:
>> We can't make correctness of our own behavior dependent upon a
>> hypervisor underneath us correctly telling us the true physical address
>> bits the hardware uses. Without knowing this, we can't be certain reserved
>> bit faults can actually be observed. Therefore, besides evaluating the
>> number of address bits when deciding whether to use the optimization,
>> also check whether we're running virtualized ourselves.
>>
>> Requested-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> Acked-by: Tim Deegan <tim@xxxxxxx>
>
> I would consider this to be a bug in the underlying hypervisor, but I
> agree that in practice it won't be safe to rely on it being correct.

I'd argue against this being a hypervisor bug.  If anything, it is a
weakness in how x86 virtualisation works.

For a VM booting on a single host, then yes - vMAXPHYSADDR really ought to
be the same as MAXPHYSADDR, and that is what happens in the common case.

For booting in a heterogeneous pool, the only safe value is the min of
MAXPHYSADDR across the resource pool.  Anything higher, and the VM will
malfunction (get #PF[rsvd] for apparently-legal PTEs) on the smallest
pool member(s).
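
To make that concrete (illustrative standalone C, not Xen code, using the
4-level-paging rule that PTE bits 51:MAXPHYSADDR are reserved): a PTE which
is perfectly legal on a wide host takes #PF[rsvd] on a narrower member of
the same pool.

#include <stdint.h>
#include <stdio.h>

/* Reserved PTE bits are 51:maxphysaddr with 4-level paging. */
static uint64_t rsvd_mask(unsigned int maxphysaddr)
{
    return ((1ULL << 52) - 1) & ~((1ULL << maxphysaddr) - 1);
}

int main(void)
{
    /* A PTE using bit 51 as a software "magic" marker, the kind of
     * reserved-bit trick a shadow fast fault path depends on. */
    uint64_t pte = 1ULL << 51;

    /* Fine on a pool member with 52 physical address bits... */
    printf("52-bit host: %s\n",
           (pte & rsvd_mask(52)) ? "#PF[rsvd]" : "ok");

    /* ...but a reserved-bit fault on a 46-bit member of the same pool. */
    printf("46-bit host: %s\n",
           (pte & rsvd_mask(46)) ? "#PF[rsvd]" : "ok");

    return 0;
}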

Address widths vary greatly between generations and SKUs, so blocking
migrate on a MAXPHYSADDR mismatch isn't a viable option.  VM migration
works in practice because native kernels don't tend to use reserved bit
optimisations in the first place.

The fault lies with Xen.  We're using a property of reserved bit
behaviour which was always going to change eventually, and can't be
levelled in common heterogeneous scenarios.
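
Roughly speaking, the gating Jan's patch describes comes down to something
like the below (illustrative C only - these are not the real Xen
identifiers): the marker bit has to genuinely be reserved given the
reported address width, and we must not be running under a hypervisor
which may be misreporting MAXPHYSADDR to us.

#include <stdbool.h>

/* Hypothetical names, sketching the decision rather than the real code. */
#define FAST_PATH_MAGIC_BIT  51u   /* bit position used as the marker */

bool use_fast_fault_path(unsigned int paddr_bits, bool running_virtualized)
{
    /* Bit 51 is only reserved when paddr_bits <= 51, and even then we
     * can only trust paddr_bits when running on bare metal. */
    return paddr_bits <= FAST_PATH_MAGIC_BIT && !running_virtualized;
}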

~Andrew
