
Re: [PATCH 11/16] x86/shadow: drop is_hvm_...() where easily possible


  • To: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Tue, 28 Mar 2023 16:41:57 +0200
  • Cc: Wei Liu <wl@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Tim Deegan <tim@xxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Tue, 28 Mar 2023 14:42:15 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 28.03.2023 15:57, Andrew Cooper wrote:
> On 24/03/2023 7:38 am, Jan Beulich wrote:
>> On 23.03.2023 19:18, Andrew Cooper wrote:
>>> On 22/03/2023 9:35 am, Jan Beulich wrote:
>>>> Emulation related functions are involved in HVM handling only, and in
>>>> some cases they even invoke such checks after having already done things
>>>> which are valid for HVM domains only. OOS active also implies HVM. In
>>>> sh_remove_all_mappings() one of the two checks is redundant with an
>>>> earlier paging_mode_external() one (the other, however, needs to stay).
>>>>
>>>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>>>>
>>>> --- a/xen/arch/x86/mm/shadow/common.c
>>>> +++ b/xen/arch/x86/mm/shadow/common.c
>>>> @@ -1522,7 +1522,7 @@ int sh_remove_all_mappings(struct domain
>>>>                 && (page->count_info & PGC_count_mask) <= 3
>>>>                 && ((page->u.inuse.type_info & PGT_count_mask)
>>>>                     == (is_special_page(page) ||
>>>> -                       (is_hvm_domain(d) && is_ioreq_server_page(d, page))))) )
>>>> +                       is_ioreq_server_page(d, page)))) )
>>>>              printk(XENLOG_G_ERR "can't find all mappings of mfn %"PRI_mfn
>>>>                     " (gfn %"PRI_gfn"): c=%lx t=%lx s=%d i=%d\n",
>>>>                     mfn_x(gmfn), gfn_x(gfn),
>>> Out of context here needs an equivalent adjustment.
>> I'm afraid I don't see any further is_hvm_*() in this function.
> 
> Final parameter to the printk(), calculating the i=%d diagnostic.

That one. This needs to stay, as calling is_ioreq_server_page() isn't
safe for PV domains. ioreq_domain_init(), which initializes the involved
spinlock, is called only for HVM domains. This is why I'm insisting ...

>>> But in this case, I'm not sure the commit message covers the relevant
>>> details.  ioreq servers have been made fully common since this code was
>>> written, and *that* is a better reason for dropping the predicates IMO
>>> than the redundancy with paging_mode_external().
>> How does "fully common" matter? It's still a HVM-only thing,
> 
> ioreq server info used to be in d->arch.hvm.ioreq.
> 
> The is_hvm_domain() was guarding against speculative type confusion in
> the HVM union, and should have been removed by the ARM work that made it
> common.
> 
> This isn't really related to the outer paging_mode_external().

... that the dropping I can safely do here is safe solely because of
the earlier paging_mode_external() check. This is irrespective of what
the check may (also) have been used for before.
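To spell out the ioreq side of this, a rough sketch from memory of the
common ioreq code (not verbatim - exact shape in the tree may differ
slightly):

    /* common/ioreq.c, abridged and from memory */
    void ioreq_domain_init(struct domain *d)
    {
        spin_lock_init(&d->ioreq_server.lock);
        ...
    }

    bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
    {
        ...
        spin_lock_recursive(&d->ioreq_server.lock); /* never initialised for PV */
        ...
    }

ioreq_domain_init() is reached only from the HVM domain init path, so for
a PV domain the lock taken by is_ioreq_server_page() was never set up.
That's why the is_hvm_domain() in the printk() argument has to remain,
while the one inside the condition (already behind the external-paging
check) can go.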

>>>> --- a/xen/arch/x86/mm/shadow/oos.c
>>>> +++ b/xen/arch/x86/mm/shadow/oos.c
>>>> @@ -577,7 +577,6 @@ int sh_unsync(struct vcpu *v, mfn_t gmfn
>>>>      if ( pg->shadow_flags &
>>>>           ((SHF_page_type_mask & ~SHF_L1_ANY) | SHF_out_of_sync)
>>>>           || sh_page_has_multiple_shadows(pg)
>>>> -         || !is_hvm_vcpu(v)
>>>>           || !v->domain->arch.paging.shadow.oos_active )
>>> This is reachable for PV guests as far as I can see.  What am I missing ?
>> Well, the footnote in patch 1 ("x86/shadow: fix and improve
>> sh_page_has_multiple_shadows()") kind of explains this wrt the safety
>> of the sh_page_has_multiple_shadows() use here: Since PV guests can't
>> have OOS pages, there's no way SHF_out_of_sync could be set.
> 
> Hmm, I suppose.  We enter sh_unsync() directly from a demand write, but
> it is only meaningful when OOS is active to begin with.
> 
> Although having looked through this, there ought to be an early exit for
> oos_active even ahead of the SHADOW_PRINTK(), and the single caller of
> this function doesn't check the return value.  (This appears to be a
> common theme...)

Well, of course there's much more cleanup left than done by this series.
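Just to illustrate what I understand you to be suggesting there - an
untested sketch only, with placement and return value still to be double
checked:

    /* oos.c:sh_unsync(), hypothetical early exit ahead of the SHADOW_PRINTK() */
    int sh_unsync(struct vcpu *v, mfn_t gmfn)
    {
        struct page_info *pg = mfn_to_page(gmfn);

        /* Nothing to do (and nothing worth logging) when OOS isn't in use. */
        if ( !v->domain->arch.paging.shadow.oos_active )
            return 0;

        SHADOW_PRINTK(...);
        ...
    }

Whether the sole caller should then also start checking the return value
would be a separate cleanup, as per above.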

Jan
