
Re: [RFC PATCH v1 4/6] xentop: collect IRQ and HYP time statistics.


  • To: "jgross@xxxxxxxx" <jgross@xxxxxxxx>, "julien@xxxxxxx" <julien@xxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>
  • Date: Fri, 12 Jun 2020 11:44:44 +0000
  • Accept-language: en-US
  • Cc: "sstabellini@xxxxxxxxxx" <sstabellini@xxxxxxxxxx>, "wl@xxxxxxx" <wl@xxxxxxx>, "andrew.cooper3@xxxxxxxxxx" <andrew.cooper3@xxxxxxxxxx>, "ian.jackson@xxxxxxxxxxxxx" <ian.jackson@xxxxxxxxxxxxx>, "george.dunlap@xxxxxxxxxx" <george.dunlap@xxxxxxxxxx>, "dfaggioli@xxxxxxxx" <dfaggioli@xxxxxxxx>, "jbeulich@xxxxxxxx" <jbeulich@xxxxxxxx>
  • Delivery-date: Fri, 12 Jun 2020 11:44:52 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Fri, 2020-06-12 at 06:57 +0200, Jürgen Groß wrote:
> On 12.06.20 02:22, Volodymyr Babchuk wrote:
> > As the scheduler code now collects time spent in IRQ handlers and in
> > do_softirq(), we can present those values to userspace tools like
> > xentop, so a system administrator can see how the system behaves.
> > 
> > We update the counters only in the sched_get_time_correction()
> > function to minimize the number of spinlocks taken. As atomic_t is
> > only 32 bits wide, it is not enough to store time with nanosecond
> > precision. So we need to use 64-bit variables and protect them with a
> > spinlock.
> > 
> > Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@xxxxxxxx>
> > ---
> >   xen/common/sched/core.c     | 17 +++++++++++++++++
> >   xen/common/sysctl.c         |  1 +
> >   xen/include/public/sysctl.h |  4 +++-
> >   xen/include/xen/sched.h     |  2 ++
> >   4 files changed, 23 insertions(+), 1 deletion(-)
> > 
> > diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
> > index a7294ff5c3..ee6b1d9161 100644
> > --- a/xen/common/sched/core.c
> > +++ b/xen/common/sched/core.c
> > @@ -95,6 +95,10 @@ static struct scheduler __read_mostly ops;
> >   
> >   static bool scheduler_active;
> >   
> > +static DEFINE_SPINLOCK(sched_stat_lock);
> > +s_time_t sched_stat_irq_time;
> > +s_time_t sched_stat_hyp_time;
> > +
> >   static void sched_set_affinity(
> >       struct sched_unit *unit, const cpumask_t *hard, const cpumask_t 
> > *soft);
> >   
> > @@ -994,9 +998,22 @@ s_time_t sched_get_time_correction(struct sched_unit 
> > *u)
> >               break;
> >       }
> >   
> > +    spin_lock_irqsave(&sched_stat_lock, flags);
> > +    sched_stat_irq_time += irq;
> > +    sched_stat_hyp_time += hyp;
> > +    spin_unlock_irqrestore(&sched_stat_lock, flags);
> 
> Please don't use a lock. Just use add_sized() instead which will add
> atomically.

Looks like arm does not support atomic accesses to 64-bit variables.

Julien, I believe this is an armv7 limitation? Should armv8 work with
64-bit atomics?

> > +
> >       return irq + hyp;
> >   }
> >   
> > +void sched_get_time_stats(uint64_t *irq_time, uint64_t *hyp_time)
> > +{
> > +    unsigned long flags;
> > +
> > +    spin_lock_irqsave(&sched_stat_lock, flags);
> > +    *irq_time = sched_stat_irq_time;
> > +    *hyp_time = sched_stat_hyp_time;
> > +    spin_unlock_irqrestore(&sched_stat_lock, flags);
> 
> read_atomic() will do the job without lock.

Yes, I really would like to use atomics there. I just need to clarify
64-bit support on Arm first.

> >   }
> >   
> >   /*
> > diff --git a/xen/common/sysctl.c b/xen/common/sysctl.c
> > index 1c6a817476..00683bc93f 100644
> > --- a/xen/common/sysctl.c
> > +++ b/xen/common/sysctl.c
> > @@ -270,6 +270,7 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) 
> > u_sysctl)
> >           pi->scrub_pages = 0;
> >           pi->cpu_khz = cpu_khz;
> >           pi->max_mfn = get_upper_mfn_bound();
> > +        sched_get_time_stats(&pi->irq_time, &pi->hyp_time);
> >           arch_do_physinfo(pi);
> >           if ( iommu_enabled )
> >           {
> > diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
> > index 3a08c512e8..f320144d40 100644
> > --- a/xen/include/public/sysctl.h
> > +++ b/xen/include/public/sysctl.h
> > @@ -35,7 +35,7 @@
> >   #include "domctl.h"
> >   #include "physdev.h"
> >   
> > -#define XEN_SYSCTL_INTERFACE_VERSION 0x00000013
> > +#define XEN_SYSCTL_INTERFACE_VERSION 0x00000014
> >   
> >   /*
> >    * Read console content from Xen buffer ring.
> > @@ -118,6 +118,8 @@ struct xen_sysctl_physinfo {
> >       uint64_aligned_t scrub_pages;
> >       uint64_aligned_t outstanding_pages;
> >       uint64_aligned_t max_mfn; /* Largest possible MFN on this host */
> > +    uint64_aligned_t irq_time;
> > +    uint64_aligned_t hyp_time;
> 
> Would hypfs work, too? This would avoid the need for extending another
> hypercall.

Good point. I'll take a look at this from the toolstack side. I didn't
see any hypfs calls in xentop, but this would be a good time to start
using it.

 

