
Re: [PATCH v2] x86/mem_sharing: support forks with active vPMU state



On Thu, Jul 21, 2022 at 8:03 AM Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx> wrote:
>
> On 20/07/2022 19:47, Tamas K Lengyel wrote:
> > diff --git a/xen/arch/x86/cpu/vpmu_amd.c b/xen/arch/x86/cpu/vpmu_amd.c
> > index 9bacc02ec1..4c76e24551 100644
> > --- a/xen/arch/x86/cpu/vpmu_amd.c
> > +++ b/xen/arch/x86/cpu/vpmu_amd.c
> > @@ -518,6 +518,14 @@ static int cf_check svm_vpmu_initialise(struct vcpu *v)
> >      return 0;
> >  }
> >
> > +#ifdef CONFIG_MEM_SHARING
> > +static int cf_check amd_allocate_context(struct vcpu *v)
> > +{
> > +    ASSERT_UNREACHABLE();
>
> What makes this unreachable?
>
> I know none of this is tested on AMD, but it is in principle reachable I
> think.
>
> I'd just leave this as return 0.  It will be slightly less rude to
> whoever adds forking support on AMD.

The only caller is the VM fork path, and VM forks are explicitly
available only on Intel (see mem_sharing_control). So this is
unreachable and IMHO should be noted as such.
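For reference, the gate looks roughly like this (paraphrasing from
memory, not the verbatim source):

    /*
     * Paraphrased sketch of mem_sharing_control(); not the exact Xen
     * source, but the shape of the Intel-only gate.
     */
    int mem_sharing_control(struct domain *d, bool enable, uint16_t flags)
    {
        if ( enable && !cpu_has_vmx )
            return -EOPNOTSUPP;    /* VM forking is VMX-only today */

        /* ... remaining checks, then flip the per-domain flag ... */
        return 0;
    }

With that in place nothing can ever reach amd_allocate_context(), hence
the ASSERT_UNREACHABLE().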

>
> > +    return 0;
> > +}
> > +#endif
> > +
> >  static const struct arch_vpmu_ops __initconst_cf_clobber amd_vpmu_ops = {
> >      .initialise = svm_vpmu_initialise,
> >      .do_wrmsr = amd_vpmu_do_wrmsr,
> > @@ -527,6 +535,10 @@ static const struct arch_vpmu_ops __initconst_cf_clobber amd_vpmu_ops = {
> >      .arch_vpmu_save = amd_vpmu_save,
> >      .arch_vpmu_load = amd_vpmu_load,
> >      .arch_vpmu_dump = amd_vpmu_dump,
> > +
> > +#ifdef CONFIG_MEM_SHARING
> > +    .allocate_context = amd_allocate_context
>
> Trailing comma please, and in the Intel structure.

Ack

> > +#endif
> >  };
> >
> >  static const struct arch_vpmu_ops *__init common_init(void)
> > diff --git a/xen/arch/x86/cpu/vpmu_intel.c b/xen/arch/x86/cpu/vpmu_intel.c
> > index 8612f46973..01d4296485 100644
> > --- a/xen/arch/x86/cpu/vpmu_intel.c
> > +++ b/xen/arch/x86/cpu/vpmu_intel.c
> > @@ -282,10 +282,17 @@ static inline void __core2_vpmu_save(struct vcpu *v)
> >      for ( i = 0; i < fixed_pmc_cnt; i++ )
> >          rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, fixed_counters[i]);
> >      for ( i = 0; i < arch_pmc_cnt; i++ )
> > +    {
> >          rdmsrl(MSR_IA32_PERFCTR0 + i, xen_pmu_cntr_pair[i].counter);
> > +        rdmsrl(MSR_P6_EVNTSEL(i), xen_pmu_cntr_pair[i].control);
> > +    }
> >
> >      if ( !is_hvm_vcpu(v) )
> >          rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, core2_vpmu_cxt->global_status);
> > +    /* Save MSR to private context to make it fork-friendly */
> > +    else if ( mem_sharing_enabled(v->domain) )
> > +        vmx_read_guest_msr(v, MSR_CORE_PERF_GLOBAL_CTRL,
> > +                           &core2_vpmu_cxt->global_ctrl);
>
> /sigh.  So we're also not using the VMCS perf controls either.
>
> That wants fixing too, but isn't a task for now.

It does get saved and swapped on vmexit, but we don't want to be doing
vmx_read/vmx_write calls in the mem_sharing codebase. It's much cleaner
if the MSR is saved into the vPMU context structure and reloaded from
there, so mem_sharing can just do a memcpy without having to know the
details.
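To illustrate what the fork side then looks like (a rough sketch; the
copy helper and the context_size field are paraphrased from this
series, not quoted verbatim):

    /*
     * Rough sketch: clone the parent's vPMU state into the fork. With
     * MSR_CORE_PERF_GLOBAL_CTRL captured in the context blob, this
     * needs no VMX-specific knowledge at all.
     */
    static void copy_vpmu(struct vcpu *d_vcpu, struct vcpu *cd_vcpu)
    {
        struct vpmu_struct *d_vpmu = vcpu_vpmu(d_vcpu);
        struct vpmu_struct *cd_vpmu = vcpu_vpmu(cd_vcpu);

        if ( !vpmu_is_set(d_vpmu, VPMU_CONTEXT_ALLOCATED) )
            return;

        /* The fork's own context comes from the new .allocate_context hook. */
        memcpy(cd_vpmu->context, d_vpmu->context, d_vpmu->context_size);
    }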

> Everything else LGTM.

Cheers!
Tamas



 

