Re: [Xen-devel] [PATCH 25/34] x86/mm/shadow: make it build with !CONFIG_HVM
On Tue, Aug 21, 2018 at 02:27:40AM -0600, Jan Beulich wrote:
> >>> On 17.08.18 at 17:12, <wei.liu2@xxxxxxxxxx> wrote:
> > @@ -423,6 +426,10 @@ const struct x86_emulate_ops *shadow_init_emulation(
> >            ? sizeof(sh_ctxt->insn_buf) : 0;
> >
> >      return &hvm_shadow_emulator_ops;
> > +#else
> > +    BUG();
> > +    return NULL;
> > +#endif
> >  }
>
> The sole invocation of the function sits right _after_ an is_hvm_...()
> conditional (which is odd enough, but presumably a result of the
> history of the code). Leaving aside a couple of labels (the goto-s to
> which are, I think, provably HVM-only as well), that is preceded by a
> shadow_mode_refcounts() check (right at the "emulate" label). I think
> shadow_mode_refcounts() wants to become constant false for !HVM, at
> which point the is_hvm_...() could even go away (but there's no strict
> need for this to happen right away).
>
> Bottom line - once again the entire function (not just its body) should
> be put inside the #ifdef, and then of course the same for
> shadow_continue_emulation().
>
> > --- a/xen/arch/x86/mm/shadow/multi.c
> > +++ b/xen/arch/x86/mm/shadow/multi.c
> > @@ -2926,18 +2926,25 @@ static int sh_page_fault(struct vcpu *v,
> >      }
> >      else
> >      {
> > +#if CONFIG_HVM
> >          /* Magic MMIO marker: extract gfn for MMIO address */
> >          ASSERT(sh_l1e_is_mmio(sl1e));
> > +        ASSERT(is_hvm_vcpu(v));
> >          gpa = (((paddr_t)(gfn_x(sh_l1e_mmio_get_gfn(sl1e))))
> >                 << PAGE_SHIFT)
> >              | (va & ~PAGE_MASK);
> > +        perfc_incr(shadow_fault_fast_mmio);
> > +        SHADOW_PRINTK("fast path mmio %#"PRIpaddr"\n", gpa);
> > +        sh_reset_early_unshadow(v);
> > +        trace_shadow_gen(TRC_SHADOW_FAST_MMIO, va);
> > +        return handle_mmio_with_translation(va, gpa >> PAGE_SHIFT,
> > +                                            access)
> > +               ? EXCRET_fault_fixed : 0;
> > +#else
> > +        /* When HVM is not enabled, there shouldn't be MMIO marker */
> > +        BUG();
>
> At the example of this, while I agree we shouldn't reach here for PV,
> can this nevertheless be the less impactful domain_crash() please?

Since Tim has reviewed this patch, I will submit a follow-up patch for
switching to domain_crash.

Wei.

>
> Jan
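
For reference, a rough sketch of the shape Jan is asking for: the whole
of shadow_init_emulation() (and likewise shadow_continue_emulation())
lives under the #ifdef rather than just its body, and
shadow_mode_refcounts() collapses to a compile-time false for !HVM so
the compiler can drop the guarded emulation paths in the caller. This
is illustrative only, not the actual follow-up patch; the parameter
types and the paging_mode_refcounts() fallback are as I recall them
from the tree, so treat those as assumptions.

    /* Sketch only -- not the actual patch; function bodies elided. */
    #ifdef CONFIG_HVM
    const struct x86_emulate_ops *shadow_init_emulation(
        struct sh_emulate_ctxt *sh_ctxt, struct cpu_user_regs *regs)
    {
        /* ... existing HVM-only body, unchanged ... */
        return &hvm_shadow_emulator_ops;
    }

    void shadow_continue_emulation(struct sh_emulate_ctxt *sh_ctxt,
                                   struct cpu_user_regs *regs)
    {
        /* ... existing HVM-only body, unchanged ... */
    }
    #endif /* CONFIG_HVM */

    /*
     * Illustrative way of making the predicate constant for !HVM, so
     * callers guarded by it compile away; the exact existing
     * definition may differ.
     */
    #ifdef CONFIG_HVM
    #define shadow_mode_refcounts(_d) paging_mode_refcounts(_d)
    #else
    #define shadow_mode_refcounts(_d) ((void)(_d), false)
    #endif

With the predicate constant false, the is_hvm_...() check ahead of the
shadow_init_emulation() call becomes redundant for !HVM builds, which
is why Jan notes it could eventually go away.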
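And a sketch of what the promised follow-up might look like in the
!CONFIG_HVM arm of sh_page_fault(): crash the offending domain rather
than the host. It assumes the function's existing struct domain *d
local and uses the standard ASSERT_UNREACHABLE()/domain_crash()
primitives; the exact form is hypothetical until the patch is posted.

    #else
            /* No MMIO marker should ever be created for a PV guest. */
            ASSERT_UNREACHABLE();
            domain_crash(d);   /* kill the misbehaving guest, not the host */
            return 0;          /* fault not fixed */
    #endif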