Re: [PATCH 11/16] x86/shadow: drop is_hvm_...() where easily possible
On 24/03/2023 7:38 am, Jan Beulich wrote:
> On 23.03.2023 19:18, Andrew Cooper wrote:
>> On 22/03/2023 9:35 am, Jan Beulich wrote:
>>> Emulation related functions are involved in HVM handling only, and in
>>> some cases they even invoke such checks after having already done things
>>> which are valid for HVM domains only. OOS active also implies HVM. In
>>> sh_remove_all_mappings() one of the two checks is redundant with an
>>> earlier paging_mode_external() one (the other, however, needs to stay).
>>>
>>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>>>
>>> --- a/xen/arch/x86/mm/shadow/common.c
>>> +++ b/xen/arch/x86/mm/shadow/common.c
>>> @@ -1522,7 +1522,7 @@ int sh_remove_all_mappings(struct domain
>>>               && (page->count_info & PGC_count_mask) <= 3
>>>               && ((page->u.inuse.type_info & PGT_count_mask)
>>>                   == (is_special_page(page) ||
>>> -                     (is_hvm_domain(d) && is_ioreq_server_page(d, page))))) )
>>> +                     is_ioreq_server_page(d, page)))) )
>>>              printk(XENLOG_G_ERR "can't find all mappings of mfn %"PRI_mfn
>>>                     " (gfn %"PRI_gfn"): c=%lx t=%lx s=%d i=%d\n",
>>>                     mfn_x(gmfn), gfn_x(gfn),
>>
>> Out of context here needs an equivalent adjustment.
>
> I'm afraid I don't see any further is_hvm_*() in this function.

The final parameter to the printk(), calculating the i=%d diagnostic.

>> But in this case, I'm not sure the commit message covers the relevant
>> details.  ioreq servers have been made fully common since this code was
>> written, and *that* is a better reason for dropping the predicates IMO
>> than the redundancy with paging_mode_external().
>
> How does "fully common" matter? It's still a HVM-only thing,

The ioreq server info used to be in d->arch.hvm.ioreq. The is_hvm_domain()
was guarding against speculative type confusion in the HVM union, and
should have been removed by the ARM work that made it common. This isn't
really related to the outer paging_mode_external().

>> That said... I'm not sure the logic here is correct any more.  It used
>> to be the case that ioreq pages were in the p2m, but they're outside of
>> the p2m these days, so I don't see how there can be any interaction with
>> unexpected refcounts any more.
>>
>> I suspect that one way or another, this change wants to be in a separate
>> patch.
>
> I think that if there are further adjustments to make (like dropping
> is_ioreq_server_page() altogether, as you appear to suggest), that would
> want to be in a separate patch, but the change as done fully fits the
> given justification. (Of course in such a patch both _could_ also be
> dropped at the same time.)

I'd still suggest doing it all separately. It's sufficiently unrelated to
the justification for the other hunks of the patch.

>>> --- a/xen/arch/x86/mm/shadow/oos.c
>>> +++ b/xen/arch/x86/mm/shadow/oos.c
>>> @@ -577,7 +577,6 @@ int sh_unsync(struct vcpu *v, mfn_t gmfn
>>>      if ( pg->shadow_flags &
>>>           ((SHF_page_type_mask & ~SHF_L1_ANY) | SHF_out_of_sync)
>>>           || sh_page_has_multiple_shadows(pg)
>>> -         || !is_hvm_vcpu(v)
>>>          || !v->domain->arch.paging.shadow.oos_active )
>>
>> This is reachable for PV guests as far as I can see.  What am I missing?
>
> Well, the footnote in patch 1 ("x86/shadow: fix and improve
> sh_page_has_multiple_shadows()") kind of explains this wrt the safety
> of the sh_page_has_multiple_shadows() use here: Since PV guests can't
> have OOS pages, there's no way SHF_out_of_sync could be set.

Hmm, I suppose. We enter sh_unsync() directly from a demand write, but it
is only meaningful when OOS is active to begin with.

Although having looked through this, there ought to be an early exit for
oos_active even ahead of the SHADOW_PRINTK(), and the single caller of
this function doesn't check the return value. (This appears to be a
common theme...)

~Andrew