
Re: [Xen-devel] [PATCH 4/4] tools: add total/local memory bandwidth monitoring



On Mon, Jan 05, 2015 at 12:39:42PM +0000, Wei Liu wrote:
> On Tue, Dec 23, 2014 at 04:54:39PM +0800, Chao Peng wrote:
> [...]
> > +static int libxl__psr_cmt_get_mem_bandwidth(libxl__gc *gc, uint32_t domid,
> > +    xc_psr_cmt_type type, uint32_t socketid, uint32_t *bandwidth)
> > +{
> > +    uint64_t sample1, sample2;
> > +    uint32_t upscaling_factor;
> > +    int rc;
> > +
> > +    rc = libxl__psr_cmt_get_l3_monitoring_data(gc, domid,
> > +                    type, socketid, &sample1);
> > +    if (rc < 0)
> > +        return ERROR_FAIL;
> > +
> > +    usleep(10000);
> > +
> > +    rc = libxl__psr_cmt_get_l3_monitoring_data(gc, domid,
> > +                    type, socketid, &sample2);
> > +    if (rc < 0)
> > +       return ERROR_FAIL;
> > +
> > +    if (sample2 < sample1) {
> > +         LOGE(ERROR, "event counter overflowed between two samplings");
> > +         return ERROR_FAIL;
> > +    }
> > +
> 
> What's the likelihood of counter overflows? Can we handle this more
> gracefully? Say, retry (with maximum retry cap) when counter overflows?
The likelihood is very small here. The hardware guarantees that the counter
will not overflow within one second, even under maximum platform bandwidth,
and we only sleep for 0.01 seconds between the two samples.
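
(As a side note on the units, and assuming the upscaling factor returned by
xc_psr_cmt_get_l3_upscaling_factor() converts raw counter values to bytes,
the conversion in the hunk quoted further down works out as:

    (sample2 - sample1)     counter units accumulated in the 10 ms window
      * 100                 -> counter units per second
      * upscaling_factor    -> bytes per second
      / 1024                -> KB per second

so the "* 100" factor is tied to the 10000us sleep between the two samples.)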

I'd like to adopt your suggestion and retry once when that happens. A single
retry should be enough to correct the overflow.
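
Something along these lines, perhaps (just a sketch against the quoted hunk,
not tested; the retry cap and loop shape are only illustrative):

    static int libxl__psr_cmt_get_mem_bandwidth(libxl__gc *gc, uint32_t domid,
        xc_psr_cmt_type type, uint32_t socketid, uint32_t *bandwidth)
    {
        uint64_t sample1, sample2;
        uint32_t upscaling_factor;
        int retries = 1;    /* at most one retry, as discussed above */
        int rc;

        do {
            rc = libxl__psr_cmt_get_l3_monitoring_data(gc, domid,
                            type, socketid, &sample1);
            if (rc < 0)
                return ERROR_FAIL;

            usleep(10000);

            rc = libxl__psr_cmt_get_l3_monitoring_data(gc, domid,
                            type, socketid, &sample2);
            if (rc < 0)
                return ERROR_FAIL;

            /* Re-sample only if the counter wrapped and a retry is left. */
        } while (sample2 < sample1 && retries-- > 0);

        if (sample2 < sample1) {
            LOGE(ERROR, "event counter overflowed between two samplings");
            return ERROR_FAIL;
        }

        rc = xc_psr_cmt_get_l3_upscaling_factor(CTX->xch, &upscaling_factor);
        if (rc < 0) {
            LOGE(ERROR, "failed to get L3 upscaling factor");
            return ERROR_FAIL;
        }

        *bandwidth = (sample2 - sample1) * 100 * upscaling_factor / 1024;
        return rc;
    }

Keeping the retry inside this helper means both the total and local bandwidth
callers get it for free.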

Thanks,
Chao
> 
> > +    rc = xc_psr_cmt_get_l3_upscaling_factor(CTX->xch, &upscaling_factor);
> > +    if (rc < 0) {
> > +        LOGE(ERROR, "failed to get L3 upscaling factor");
> > +        return ERROR_FAIL;
> > +    }
> > +
> > +    *bandwidth = (sample2 - sample1) * 100 *  upscaling_factor / 1024;
> > +    return rc;
> > +}
> > +
> > +int libxl_psr_cmt_get_total_mem_bandwidth(libxl_ctx *ctx, uint32_t domid,
> > +    uint32_t socketid, uint32_t *bandwidth)
> > +{
> > +    GC_INIT(ctx);
> > +    int rc;
> > +
> > +    rc = libxl__psr_cmt_get_mem_bandwidth(gc, domid,
> > +                    XC_PSR_CMT_TOTAL_MEM_BANDWIDTH, socketid, bandwidth);
> > +    GC_FREE;
> > +    return rc;
> > +}
> > +
> > +int libxl_psr_cmt_get_local_mem_bandwidth(libxl_ctx *ctx, uint32_t domid,
> > +    uint32_t socketid, uint32_t *bandwidth)
> > +{
> > +    GC_INIT(ctx);
> > +    int rc;
> > +
> > +    rc = libxl__psr_cmt_get_mem_bandwidth(gc, domid,
> > +                    XC_PSR_CMT_LOCAL_MEM_BANDWIDTH, socketid, bandwidth);
> > +    GC_FREE;
> > +    return rc;
> > +}
> > +
> >  /*
> >   * Local variables:
> >   * mode: C
> > diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
> > index f7fc695..8029a39 100644
> > --- a/tools/libxl/libxl_types.idl
> > +++ b/tools/libxl/libxl_types.idl
> > @@ -693,4 +693,6 @@ libxl_event = Struct("event",[
> >  
> >  libxl_psr_cmt_type = Enumeration("psr_cmt_type", [
> >      (1, "CACHE_OCCUPANCY"),
> > +    (2, "TOTAL_MEM_BANDWIDTH"),
> > +    (3, "LOCAL_MEM_BANDWIDTH"),
> >      ])
> > diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
> > index f4534ec..e0435dd 100644
> > --- a/tools/libxl/xl_cmdimpl.c
> > +++ b/tools/libxl/xl_cmdimpl.c
> > @@ -7867,6 +7867,16 @@ static void psr_cmt_print_domain_l3_info(libxl_dominfo *dominfo,
> >                   socketid, &data) )
> >                  printf("%13u KB", data);
> >              break;
> > +        case LIBXL_PSR_CMT_TYPE_TOTAL_MEM_BANDWIDTH:
> > +            if ( !libxl_psr_cmt_get_total_mem_bandwidth(ctx, dominfo->domid,
> 
> Coding style.
> 
> > +                 socketid, &data) )
> > +                printf("%11u KB/s", data);
> > +            break;
> > +        case LIBXL_PSR_CMT_TYPE_LOCAL_MEM_BANDWIDTH:
> > +            if ( !libxl_psr_cmt_get_local_mem_bandwidth(ctx, dominfo->domid,
> 
> Ditto.
> 
> Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

