
Re: [PATCH 11/16] x86/shadow: drop is_hvm_...() where easily possible


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Tue, 28 Mar 2023 14:57:50 +0100
  • Cc: Wei Liu <wl@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Tim Deegan <tim@xxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Tue, 28 Mar 2023 13:58:31 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 24/03/2023 7:38 am, Jan Beulich wrote:
> On 23.03.2023 19:18, Andrew Cooper wrote:
>> On 22/03/2023 9:35 am, Jan Beulich wrote:
>>> Emulation related functions are involved in HVM handling only, and in
>>> some cases they even invoke such checks after having already done things
>>> which are valid for HVM domains only. OOS active also implies HVM. In
>>> sh_remove_all_mappings() one of the two checks is redundant with an
>>> earlier paging_mode_external() one (the other, however, needs to stay).
>>>
>>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>>>
>>> --- a/xen/arch/x86/mm/shadow/common.c
>>> +++ b/xen/arch/x86/mm/shadow/common.c
>>> @@ -1522,7 +1522,7 @@ int sh_remove_all_mappings(struct domain
>>>                 && (page->count_info & PGC_count_mask) <= 3
>>>                 && ((page->u.inuse.type_info & PGT_count_mask)
>>>                     == (is_special_page(page) ||
>>> -                       (is_hvm_domain(d) && is_ioreq_server_page(d, page))))) )
>>> +                       is_ioreq_server_page(d, page)))) )
>>>              printk(XENLOG_G_ERR "can't find all mappings of mfn %"PRI_mfn
>>>                     " (gfn %"PRI_gfn"): c=%lx t=%lx s=%d i=%d\n",
>>>                     mfn_x(gmfn), gfn_x(gfn),
>> Out of context here, an equivalent adjustment is needed.
> I'm afraid I don't see any further is_hvm_*() in this function.

Final parameter to the printk(), calculating the i=%d diagnostic.
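
For reference, the tail of that printk() looks roughly like this (a
sketch from memory rather than a quote of the tree, so the exact
arguments may differ):

    printk(XENLOG_G_ERR "can't find all mappings of mfn %"PRI_mfn
           " (gfn %"PRI_gfn"): c=%lx t=%lx s=%d i=%d\n",
           mfn_x(gmfn), gfn_x(gfn),
           page->count_info, page->u.inuse.type_info,
           /* s= */ is_special_page(page),
           /* i= */ is_hvm_domain(d) && is_ioreq_server_page(d, page));

i.e. the i=%d argument wants the same is_hvm_domain() drop as the hunk
above.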

>> But in this case, I'm not sure the commit message covers the relevant
>> details.  ioreq servers have been made fully common since this code was
>> written, and *that* is a better reason for dropping the predicates IMO
>> than the redundancy with paging_mode_external().
> How does "fully common" matter? It's still an HVM-only thing,

ioreq server info used to be in d->arch.hvm.ioreq.

The is_hvm_domain() was guarding against speculative type confusion in
the HVM union, and should have been removed by the ARM work that made it
common.
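
To illustrate the hazard (a simplified sketch, not the real structure
layout):

    struct arch_domain {
        /* ... */
        union {
            struct hvm_domain hvm;  /* valid only if is_hvm_domain(d) */
            struct pv_domain  pv;   /* valid only if is_pv_domain(d)  */
        };
    };

Before the ioreq state was made common, is_ioreq_server_page() reached
into d->arch.hvm, so calling it on a PV domain would (architecturally
or speculatively) interpret pv's bytes as hvm's; hence the
is_hvm_domain() qualification.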

This isn't really related to the outer paging_mode_external().

>> That said...  I'm not sure the logic here is correct any more.  It used
>> to be the case that ioreq pages were in the p2m, but they're outside of
>> the p2m these days, so I don't see how there can be any interaction with
>> unexpected refcounts any more.
>>
>> I suspect that one way or another, this change wants to be in a separate
>> patch.
> I think that if there are further adjustments to make (like dropping
> is_ioreq_server_page() altogether, as you appear to suggest), that would
> want to be in a separate patch, but the change as done fully fits the
> given justification. (Of course in such a patch both _could_ also be
> dropped at the same time.)

I'd still suggest doing it all separately.  It's sufficiently unrelated
to the justification for the other hunks of the patch.

>>> --- a/xen/arch/x86/mm/shadow/oos.c
>>> +++ b/xen/arch/x86/mm/shadow/oos.c
>>> @@ -577,7 +577,6 @@ int sh_unsync(struct vcpu *v, mfn_t gmfn
>>>      if ( pg->shadow_flags &
>>>           ((SHF_page_type_mask & ~SHF_L1_ANY) | SHF_out_of_sync)
>>>           || sh_page_has_multiple_shadows(pg)
>>> -         || !is_hvm_vcpu(v)
>>>           || !v->domain->arch.paging.shadow.oos_active )
>> This is reachable for PV guests as far as I can see.  What am I missing?
> Well, the footnote in patch 1 ("x86/shadow: fix and improve
> sh_page_has_multiple_shadows()") kind of explains this wrt the safety
> of the sh_page_has_multiple_shadows() use here: Since PV guests can't
> have OOS pages, there's no way SHF_out_of_sync could be set.

Hmm, I suppose.  We enter sh_unsync() directly from a demand write, but
it is only meaningful when OOS is active to begin with.

Although, having looked through this, I think there ought to be an
early exit for oos_active even ahead of the SHADOW_PRINTK(), and the
single caller of this function doesn't check the return value.  (This
appears to be a common theme...)
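
Something along these lines (a hypothetical restructuring, untested,
just to show the shape):

    int sh_unsync(struct vcpu *v, mfn_t gmfn)
    {
        struct page_info *pg = mfn_to_page(gmfn);

        /* Bail before doing any work (or logging) when OOS isn't in
         * use; note the sole caller ignores the return value anyway. */
        if ( !v->domain->arch.paging.shadow.oos_active )
            return 0;

        SHADOW_PRINTK(/* ... */);
        /* ... remaining eligibility checks and the unsync proper ... */
    }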

~Andrew