
Re: [Xen-devel] [PATCH v2 8/8] x86: enable CQM monitoring for each domain RMID



> -----Original Message-----
> From: Andrew Cooper [mailto:andrew.cooper3@xxxxxxxxxx]
> Sent: Thursday, November 21, 2013 10:19 PM
> To: Xu, Dongxiao
> Cc: xen-devel@xxxxxxxxxxxxx
> Subject: Re: [Xen-devel] [PATCH v2 8/8] x86: enable CQM monitoring for each
> domain RMID
> 
> On 21/11/13 07:20, dongxiao.xu@xxxxxxxxx wrote:
> > From: Dongxiao Xu <dongxiao.xu@xxxxxxxxx>
> >
> > If the CQM service is attached to a domain, its related RMID will be set
> > to hardware for monitoring when the domain's vcpu is scheduled in. When
> > the domain's vcpu is scheduled out, RMID 0 (system reserved) will be set
> > for monitoring.
> >
> > Signed-off-by: Jiongxi Li <jiongxi.li@xxxxxxxxx>
> > Signed-off-by: Dongxiao Xu <dongxiao.xu@xxxxxxxxx>
> > ---
> >  xen/arch/x86/domain.c           |    5 +++++
> >  xen/arch/x86/pqos.c             |   21 ++++++++++++++++++++-
> >  xen/include/asm-x86/msr-index.h |    1 +
> >  xen/include/asm-x86/pqos.h      |    1 +
> >  4 files changed, 27 insertions(+), 1 deletion(-)
> >
> > diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
> > index 9725649..1eda0ab 100644
> > --- a/xen/arch/x86/domain.c
> > +++ b/xen/arch/x86/domain.c
> > @@ -1372,6 +1372,8 @@ static void __context_switch(void)
> >      {
> >          memcpy(&p->arch.user_regs, stack_regs, CTXT_SWITCH_STACK_BYTES);
> >          vcpu_save_fpu(p);
> > +        if ( system_supports_cqm() )
> > +            cqm_assoc_rmid(0);
> 
> So actions from the idle domain are accounted against the previously
> scheduled vcpu?

No. Consider the following cases:
 - On a context switch from a normal domain vcpu (p) to an idle domain vcpu (n), 
we associate RMID=0 with the CPU hardware in ctxt_switch_from(p), so that the 
idle domain is accounted to RMID=0;
 - On a context switch from an idle domain vcpu (p) to a normal domain vcpu (n), 
we associate the domain's RMID in ctxt_switch_to(n).

> 
> >          p->arch.ctxt_switch_from(p);
> >      }
> >
> > @@ -1396,6 +1398,9 @@ static void __context_switch(void)
> >          }
> >          vcpu_restore_fpu_eager(n);
> >          n->arch.ctxt_switch_to(n);
> > +
> > +        if ( system_supports_cqm() && n->domain->arch.pqos_cqm_rmid > 0 )
> 
> n->domain->arch.pqos_cqm_rmid can only be greater than 0 if the system
> already supports cqm()

This should have been changed in the v2 patch, sorry.

> 
> > +            cqm_assoc_rmid(n->domain->arch.pqos_cqm_rmid);
> 
> What happens to subsequent Xen accesses before returning to the guest?
> 
> What happens for Xen accesses in interrupt handlers?

The measurement is not that accurate for the CQM feature.
CQM is somewhat like xentop in this respect: xentop does not exactly 
differentiate context-switch cost from interrupt-handling cost either, so we 
adopt similar logic for CQM.

> 
> >      }
> >
> >      gdt = !is_pv_32on64_vcpu(n) ? per_cpu(gdt_table, cpu) :
> > diff --git a/xen/arch/x86/pqos.c b/xen/arch/x86/pqos.c
> > index d95784c..0cec172 100644
> > --- a/xen/arch/x86/pqos.c
> > +++ b/xen/arch/x86/pqos.c
> > @@ -27,6 +27,7 @@
> >  #include <asm/pqos.h>
> >
> >  static bool_t pqos_enabled = 1;
> > +static unsigned int rmid_bitwidth;
> >  boolean_param("pqos", pqos_enabled);
> >
> >  static void read_qm_data(void *arg)
> > @@ -43,6 +44,14 @@ static void get_generic_qm_info(struct qm_element *qm_element)
> >      on_selected_cpus(cpumask_of(cpu), read_qm_data, qm_element, 1);
> >  }
> >
> > +void cqm_assoc_rmid(uint16_t rmid)
> > +{
> > +    uint64_t val;
> > +    uint64_t rmid_mask = ~(~0ull << rmid_bitwidth);
> > +    rdmsrl(MSR_IA32_PQR_ASSOC, val);
> > +    wrmsrl(MSR_IA32_PQR_ASSOC, (val & ~(rmid_mask)) | (rmid & rmid_mask));
> > +}
> > +
> >  unsigned int cqm_res_count = 0;
> >  unsigned int cqm_upscaling_factor = 0;
> >  bool_t cqm_enabled = 0;
> > @@ -78,13 +87,23 @@ static void __init init_cqm(void)
> >  static void __init init_qos_monitor(void)
> >  {
> >      unsigned int qm_features;
> > -    unsigned int eax, ebx, ecx;
> > +    unsigned int eax, ebx, ecx, i = 0;
> >
> >      if ( !(boot_cpu_has(X86_FEATURE_QOSM)) )
> >          return;
> >
> >      cpuid_count(0xf, 0, &eax, &ebx, &ecx, &qm_features);
> >
> > +    while (1)
> > +    {
> > +        if ( (1 << i) - 1 >= ebx )
> > +        {
> > +            rmid_bitwidth = i;
> > +            break;
> > +        }
> > +        i++;
> > +    }
> 
> Is this some approximation a logarithm ?
> 
> Would it not be better to store rmid_mask (which would appear to be ebx)
> rather than recalculate it from the constant rmid_bitwidth every time
> cqm_assoc_rmid() is called ?

OK.

> 
> > +
> >      if ( qm_features & QOS_MONITOR_TYPE_L3 )
> >          init_cqm();
> >  }
> > diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
> > index 46ef165..45f4918 100644
> > --- a/xen/include/asm-x86/msr-index.h
> > +++ b/xen/include/asm-x86/msr-index.h
> > @@ -491,5 +491,6 @@
> >  /* Platform QoS register */
> >  #define MSR_IA32_QOSEVTSEL             0x00000c8d
> >  #define MSR_IA32_QMC                   0x00000c8e
> > +#define MSR_IA32_PQR_ASSOC             0x00000c8f
> >
> >  #endif /* __ASM_MSR_INDEX_H */
> > diff --git a/xen/include/asm-x86/pqos.h b/xen/include/asm-x86/pqos.h
> > index 5c86c5d..2bb983b 100644
> > --- a/xen/include/asm-x86/pqos.h
> > +++ b/xen/include/asm-x86/pqos.h
> > @@ -51,5 +51,6 @@ unsigned int get_cqm_count(void);
> >  unsigned int get_cqm_avail(void);
> >  void get_cqm_info(uint32_t rmid, cpumask_t cpu_cqmdata_map,
> >                    struct xen_domctl_getdomcqminfo *info);
> > +void cqm_assoc_rmid(uint16_t rmid);
> 
> rmid is inconsistently an int, u32 and u16 across this patch series.
> Can this be cleaned up.

OK, will do the cleanup.

Thanks,
Dongxiao

> 
> ~Andrew
> 
> >
> >  #endif


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
