
[Xen-devel] Re: Re: about xenalyze



$xenalyze --summary trace_file_discuz.bin 
Using VMX hardware-assisted virtualization.
scan_for_new_pcpu: Activating pcpu 10 at offset 0
Creating vcpu 10 for dom 32768
scan_for_new_pcpu: Activating pcpu 21 at offset 42196
Creating vcpu 21 for dom 32768
scan_for_new_pcpu: Activating pcpu 22 at offset 43516
Creating vcpu 22 for dom 32768
scan_for_new_pcpu: Activating pcpu 24 at offset 44664
Creating vcpu 24 for dom 32768
init_pcpus: through first trace write, done for now.
Detected off-by-one bug; relaxing expectations
hvm_generic_postprocess: Strange, exit 2c(APIC_ACCESS) missing a handler
scan_for_new_pcpu: Activating pcpu 7 at offset 3360952
Creating vcpu 7 for dom 32768
process_cpu_change: Activating pcpu 11 at offset 3396528
Creating vcpu 11 for dom 32768
process_cpu_change: Activating pcpu 16 at offset 4685392
Creating vcpu 16 for dom 32768
process_cpu_change: Activating pcpu 25 at offset 4767196
Creating vcpu 25 for dom 32768
process_cpu_change: Activating pcpu 23 at offset 8820852
Creating vcpu 23 for dom 32768
process_cpu_change: Activating pcpu 26 at offset 10467560
Creating vcpu 26 for dom 32768
process_cpu_change: Activating pcpu 27 at offset 13377152
Creating vcpu 27 for dom 32768
process_cpu_change: Activating pcpu 0 at offset 14202228
Creating vcpu 0 for dom 32768
process_cpu_change: Activating pcpu 12 at offset 14204024
Creating vcpu 12 for dom 32768
process_cpu_change: Activating pcpu 17 at offset 14762404
Creating vcpu 17 for dom 32768
process_cpu_change: Activating pcpu 28 at offset 14771148
Creating vcpu 28 for dom 32768
scan_for_new_pcpu: Activating pcpu 1 at offset 14776680
Creating vcpu 1 for dom 32768
process_cpu_change: Activating pcpu 4 at offset 14779148
Creating vcpu 4 for dom 32768
process_cpu_change: Activating pcpu 13 at offset 14825668
Creating vcpu 13 for dom 32768
process_cpu_change: Activating pcpu 18 at offset 14938016
Creating vcpu 18 for dom 32768
process_cpu_change: Activating pcpu 20 at offset 14941080
Creating vcpu 20 for dom 32768
process_cpu_change: Activating pcpu 29 at offset 14954728
Creating vcpu 29 for dom 32768
read_record: read returned zero, deactivating pcpu 10
deactivate_pcpu: setting d32768v10 to state LOST
hvm_generic_postprocess: Strange, exit 0(EXCEPTION_NMI) missing a handler
hvm_generic_postprocess: HVM evt 0 in 2c and 0!
read_record: read returned zero, deactivating pcpu 7
deactivate_pcpu: setting d32768v7 to state LOST
read_record: read returned zero, deactivating pcpu 11
deactivate_pcpu: setting d32768v11 to state LOST
read_record: read returned zero, deactivating pcpu 16
deactivate_pcpu: setting d32768v16 to state LOST
read_record: read returned zero, deactivating pcpu 21
deactivate_pcpu: setting d32768v21 to state LOST
read_record: read returned zero, deactivating pcpu 22
deactivate_pcpu: setting d32768v22 to state LOST
read_record: read returned zero, deactivating pcpu 26
deactivate_pcpu: setting d32768v26 to state LOST
read_record: read returned zero, deactivating pcpu 27
deactivate_pcpu: setting d32768v27 to state LOST
read_record: read returned zero, deactivating pcpu 0
deactivate_pcpu: setting d32768v0 to state LOST
read_record: read returned zero, deactivating pcpu 12
deactivate_pcpu: setting d32768v12 to state LOST
read_record: read returned zero, deactivating pcpu 17
deactivate_pcpu: setting d32768v17 to state LOST
read_record: read returned zero, deactivating pcpu 28
deactivate_pcpu: setting d32768v28 to state LOST
read_record: read returned zero, deactivating pcpu 1
deactivate_pcpu: setting d32768v1 to state LOST
read_record: read returned zero, deactivating pcpu 4
deactivate_pcpu: setting d32768v4 to state LOST
read_record: read returned zero, deactivating pcpu 13
deactivate_pcpu: setting d32768v13 to state LOST
read_record: read returned zero, deactivating pcpu 18
deactivate_pcpu: setting d32768v18 to state LOST
read_record: read returned zero, deactivating pcpu 20
deactivate_pcpu: setting d32768v20 to state LOST
read_record: read returned zero, deactivating pcpu 25
deactivate_pcpu: setting d32768v25 to state LOST
read_record: read returned zero, deactivating pcpu 23
deactivate_pcpu: setting d32768v23 to state LOST
read_record: read returned zero, deactivating pcpu 24
deactivate_pcpu: setting d32768v24 to state LOST
read_record: read returned zero, deactivating pcpu 29
deactivate_pcpu: setting d32768v29 to state LOST
deactivate_pcpu: Setting max_active_pcpu to -1
Total time: 32.49 seconds (using cpu speed 2.40 GHz)
--- Log volume summary ---
 - cpu 0 -
 hvm   :       1784
 +-vmentry:        448
 +-vmexit :        784
 +-handler:        552
 - cpu 1 -
 hvm   :       2456
 +-vmentry:        688
 +-vmexit :       1204
 +-handler:        564
 - cpu 4 -
 hvm   :      31220
 +-vmentry:       8544
 +-vmexit :      14952
 +-handler:       7724
 - cpu 7 -
 hvm   :      22708
 +-vmentry:       6240
 +-vmexit :      10920
 +-handler:       5548
 - cpu 10 -
 gen   :        336
 hvm   :    2925480
 +-vmentry:     746048
 +-vmexit :    1305612
 +-handler:     873820
 - cpu 11 -
 gen   :        104
 hvm   :    1159676
 +-vmentry:     297584
 +-vmexit :     520772
 +-handler:     341320
 - cpu 12 -
 gen   :         48
 hvm   :     453336
 +-vmentry:     116240
 +-vmexit :     203420
 +-handler:     133676
 - cpu 13 -
 gen   :          4
 hvm   :      99404
 +-vmentry:      24784
 +-vmexit :      43372
 +-handler:      31248
 - cpu 16 -
 hvm   :       7092
 +-vmentry:       1728
 +-vmexit :       3024
 +-handler:       2340
 - cpu 17 -
 hvm   :       1536
 +-vmentry:        432
 +-vmexit :        756
 +-handler:        348
 - cpu 18 -
 hvm   :       3052
 +-vmentry:        752
 +-vmexit :       1316
 +-handler:        984
 - cpu 20 -
 hvm   :       5652
 +-vmentry:       1344
 +-vmexit :       2352
 +-handler:       1956
 - cpu 21 -
 gen   :         44
 hvm   :     439156
 +-vmentry:     122656
 +-vmexit :     214648
 +-handler:     101852
 - cpu 22 -
 gen   :        108
 hvm   :     634364
 +-vmentry:     178096
 +-vmexit :     311668
 +-handler:     144600
 - cpu 23 -
 gen   :         56
 hvm   :     489208
 +-vmentry:     136288
 +-vmexit :     238504
 +-handler:     114416
 - cpu 24 -
 gen   :        396
 hvm   :    3937836
 +-vmentry:    1010016
 +-vmexit :    1767528
 +-handler:    1160292
 - cpu 25 -
 gen   :        148
 hvm   :    1747160
 +-vmentry:     453488
 +-vmexit :     793604
 +-handler:     500068
 - cpu 26 -
 gen   :        220
 hvm   :    2588608
 +-vmentry:     661024
 +-vmexit :    1156792
 +-handler:     770792
 - cpu 27 -
 gen   :        112
 hvm   :     745320
 +-vmentry:     189632
 +-vmexit :     331856
 +-handler:     223832
 - cpu 28 -
 hvm   :      13316
 +-vmentry:       3712
 +-vmexit :       6496
 +-handler:       3108
 - cpu 29 -
 gen   :        348
 hvm   :    3038412
 +-vmentry:     771680
 +-vmexit :    1350412
 +-handler:     916320


This is the full output. 

-----Original Message-----
From: George Dunlap [mailto:george.dunlap@xxxxxxxxxxxxx]
Sent: 8 May 2015 1:58
To: èéä(èå)
Cc: xen-devel@xxxxxxxxxxxxx; xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: Re: Re: about xenalyze

On 05/07/2015 04:30 AM, èéä(èå) wrote:
>  Hi, George
> 
> I deployed a Discuz application in a VM and used jmeter to stress the
> application.  Meanwhile I used xentrace to capture a trace:
> #xentrace -D -e 0x0008f000 -T 30 trace_file_discuz.bin
> 
> Then I used xenalyze to analyze it:
> #xenalyze --summary trace_file_discuz.bin
> It tells me the following:
> 
>  --- Log volume summary ---
>   - cpu 0 -
>   hvm   :       1784
>   +-vmentry:        448
>   +-vmexit :        784
>   +-handler:        552
> 
> [...]
>  
>  
>  It seems different from what you show in your presentation "Xenalyze: 
> Finding meaning in the chaos". What's wrong with it?
>  I would like to see the reason, counts, and cpu time for the handler of each 
> VMEXIT.  I would also like to see the wait time, blocked time, and cpu usage of 
> a domain.  What should I do?

The log volume summary is just the first part of the output; normally the 
summary of stuff by domain is below that.

Can you please include the full output?

Thanks,
 -George


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

