Re: [Xen-devel] [PATCH v2] xen/x86: vpmu: Unmap per-vCPU PMU page when the domain is destroyed
> -----Original Message-----
> From: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
> Sent: 27 November 2019 16:32
> To: Jan Beulich <jbeulich@xxxxxxxx>; Durrant, Paul <pdurrant@xxxxxxxxxx>
> Cc: Grall, Julien <jgrall@xxxxxxxxxx>; Andrew Cooper
> <andrew.cooper3@xxxxxxxxxx>; Roger Pau Monné <roger.pau@xxxxxxxxxx>; Jun
> Nakajima <jun.nakajima@xxxxxxxxx>; Kevin Tian <kevin.tian@xxxxxxxxx>; Wei
> Liu <wl@xxxxxxx>; xen-devel@xxxxxxxxxxxxxxxxxxxx
> Subject: Re: [PATCH v2] xen/x86: vpmu: Unmap per-vCPU PMU page when the
> domain is destroyed
>
> On 11/27/19 10:44 AM, Jan Beulich wrote:
> > On 27.11.2019 13:00, Paul Durrant wrote:
> >> --- a/xen/arch/x86/cpu/vpmu.c
> >> +++ b/xen/arch/x86/cpu/vpmu.c
> >> @@ -479,6 +479,8 @@ static int vpmu_arch_initialise(struct vcpu *v)
> >>
> >>      if ( ret )
> >>          printk(XENLOG_G_WARNING "VPMU: Initialization failed for %pv\n", v);
> >> +    else
> >> +        vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
> >
> > That won't work I think.
>
> On Intel the context is allocated lazily for HVM/PVH guests during the
> first MSR access. For example:
>
> core2_vpmu_do_wrmsr() ->
>     core2_vpmu_msr_common_check():
>         if ( unlikely(!vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED)) &&
>              !core2_vpmu_alloc_resource(current) )
>             return 0;
>
> For PV guests the context *is* allocated from vmx_vpmu_initialise().
>
> I don't remember why only PV does eager allocation but I think doing it
> for all guests would make code much simpler and then this patch will be
> correct.

Ok. Simpler if I leave setting the flag in the implementation code. I think clearing it in vcpu_arch_destroy() would still be correct in all cases.
  Paul

> -boris
>
> >>
> >>      return ret;
> >>  }
> >> @@ -576,11 +578,36 @@ static void vpmu_arch_destroy(struct vcpu *v)
> >>
> >>          vpmu->arch_vpmu_ops->arch_vpmu_destroy(v);
> >>      }
> >> +
> >> +    vpmu_reset(vpmu, VPMU_CONTEXT_ALLOCATED);
> >>  }
> >
> > Boris,
> >
> > I'd like to ask that you comment on this part of the change at
> > least, as I seem to vaguely recall that things were intentionally
> > not done this way originally.
> >
> > Paul,
> >
> > everything else looks good to me now.
> >
> > Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel