
Re: [Xen-devel] [RFC 3/6] sysctl: extend XEN_SYSCTL_getcpuinfo interface





On 26.07.19 15:15, Dario Faggioli wrote:
> Yep, I think being able to know time spent running guests could be
> useful.

Well, my intention was to see the hypervisor's own run time and the true idle time.
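
For reference, the current public structure only carries idle time; here is a
minimal sketch of the kind of extension I mean (the guesttime/hyptime names
below are illustrative assumptions, not necessarily what the patch uses):

struct xen_sysctl_cpuinfo {
    uint64_aligned_t idletime;   /* existing field: ns the pCPU was idle */
    /* Illustrative additions (names are assumptions, see the patch): */
    uint64_aligned_t guesttime;  /* ns spent running guest vCPUs */
    uint64_aligned_t hyptime;    /* ns spent in the hypervisor itself */
};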

With the full series applied, I see a distinct difference in xentop output
depending on the type of load in the domains.

On my regular system (hardware-less Dom0; Linux with UI, aka DomD; Android
with PV drivers, aka DomA), I see the following:

Idle system:

xentop - 10:10:42   Xen 4.13-unstable
3 domains: 1 running, 2 blocked, 0 paused, 0 crashed, 0 dying, 0 shutdown
%CPU(s):    7.0 gu,    2.6 hy,  390.4 id
Mem: 8257536k total, 8257536k used, 99020k free    CPUs: 4 @ 8MHz
      NAME  STATE   CPU(sec) CPU(%)     MEM(k) MEM(%)  MAXMEM(k) MAXMEM(%) VCPUS NETS NETTX(k) NETRX(k) VBDS   VBD_OO   VBD_RD   VBD_WR  VBD_RSECT  VBD_WSECT SSID
      DomA --b---         76    3.3    6258456   75.8    6259712      75.8     4    0        0        0    0        0        0        0          0          0    0
  Domain-0 -----r         14    1.0     262144    3.2   no limit       n/a     4    0        0        0    0        0        0        0          0          0    0
      DomD --b---        111    2.8    1181972   14.3    1246208      15.1     4    0        0        0    0        0        0        0          0          0    0
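
(As a sanity check, assuming the %CPU(s) line sums per-pCPU percentages, four
pCPUs give a 400% budget, and the three columns do add up to it in each
snapshot: 7.0 + 2.6 + 390.4 = 400.0 here, 389.1 + 10.9 + 0.0 = 400.0 and
165.7 + 51.4 + 182.9 = 400.0 below.)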


System with CPU burners in all domains:

xentop - 10:12:19   Xen 4.13-unstable
3 domains: 3 running, 0 blocked, 0 paused, 0 crashed, 0 dying, 0 shutdown
%CPU(s):  389.1 gu,   10.9 hy,    0.0 id
Mem: 8257536k total, 8257536k used, 99020k free    CPUs: 4 @ 8MHz
      NAME  STATE   CPU(sec) CPU(%)     MEM(k) MEM(%)  MAXMEM(k) MAXMEM(%) VCPUS NETS NETTX(k) NETRX(k) VBDS   VBD_OO   VBD_RD   VBD_WR  VBD_RSECT  VBD_WSECT SSID
      DomA -----r        115  129.7    6258456   75.8    6259712      75.8     4    0        0        0    0        0        0        0          0          0    0
  Domain-0 -----r        120  129.8     262144    3.2   no limit       n/a     4    0        0        0    0        0        0        0          0          0    0
      DomD -----r        163  129.6    1181972   14.3    1246208      15.1     4    0        0        0    0        0        0        0          0          0    0


System with GPU load run both in DomD and DomA:

xentop - 10:14:26   Xen 4.13-unstable
3 domains: 2 running, 1 blocked, 0 paused, 0 crashed, 0 dying, 0 shutdown
%CPU(s):  165.7 gu,   51.4 hy,  182.9 id
Mem: 8257536k total, 8257536k used, 99020k free    CPUs: 4 @ 8MHz
      NAME  STATE   CPU(sec) CPU(%)     MEM(k) MEM(%)  MAXMEM(k) MAXMEM(%) VCPUS NETS NETTX(k) NETRX(k) VBDS   VBD_OO   VBD_RD   VBD_WR  VBD_RSECT  VBD_WSECT SSID
      DomA --b---        250   60.8    6258456   75.8    6259712      75.8     4    0        0        0    0        0        0        0          0          0    0
  Domain-0 -----r        159    2.1     262144    3.2   no limit       n/a     4    0        0        0    0        0        0        0          0          0    0
      DomD -----r        275  102.7    1181972   14.3    1246208      15.1     4    0        0        0    0        0        0        0          0          0    0


You can see the rise in CPU time used by the hypervisor itself in the
high-IRQ use case (GPU load): hy goes from 2.6% on the idle system and 10.9%
with CPU burners to 51.4% here.
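
For completeness, here is a rough sketch of how a tool like xentop could pull
the per-pCPU numbers through libxc (xc_interface_open() and xc_getcpuinfo()
are existing libxc calls; I only print the long-standing idletime field, since
the extended fields depend on this series):

#include <inttypes.h>
#include <stdio.h>
#include <xenctrl.h>

int main(void)
{
    xc_interface *xch = xc_interface_open(NULL, NULL, 0);
    xc_cpuinfo_t info[8];   /* plenty for this 4-pCPU board */
    int nr_cpus = 0;

    if ( !xch )
        return 1;

    if ( xc_getcpuinfo(xch, 8, info, &nr_cpus) == 0 )
        for ( int i = 0; i < nr_cpus; i++ )
            printf("cpu%d: idle %"PRIu64" ns\n", i, info[i].idletime);

    xc_interface_close(xch);
    return 0;
}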

> I confirm what I said about patch 1: idle time being the time idle_vcpu
> spent in RUNSTATE_blocked, and hypervisor time being the time idle_vcpu
> spent in RUNSTATE_running sounds quite confusing to me.

As I said before, think of idle_vcpu as hypervisor_vcpu ;)
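
In other words, with that reading the split falls out of the existing
runstate machinery. A hedged sketch of how the per-pCPU values could be
derived (vcpu_runstate_get(), idle_vcpu[] and the RUNSTATE_* constants are
existing Xen code; the info field names are the illustrative ones from my
earlier sketch):

/*
 * Under the idle_vcpu == hypervisor_vcpu reading: while Xen itself
 * does work, the idle vCPU is accounted as RUNSTATE_running; while
 * the pCPU truly idles, it is accounted as RUNSTATE_blocked.
 */
struct vcpu_runstate_info rs;

vcpu_runstate_get(idle_vcpu[cpu], &rs);
info->idletime = rs.time[RUNSTATE_blocked]; /* true idle */
info->hyptime  = rs.time[RUNSTATE_running]; /* hypervisor's own work */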

--
Sincerely,
Andrii Anisov.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

