
Re: [Xen-devel] weird hvm performance issue.




Thanks for your quick response! (mine was delayed by figuring out oprofile.)

On Mon, 29 Jan 2007, Keir Fraser wrote:
On 29/1/07 18:08, "Luke S. Crawford" <lsc@xxxxxxxxx> wrote:
The CPU usage of qemu-dm correlates with the speed of the scroll; exiting from the xm console does not appear to affect the cpu usage of qemu-dm - it will drop to 1% after several minutes.

This kind of thing can be a pain to debug. Perhaps instrument qemu-dm and
find out interesting things like how often its select() call returns due to
an event on a file descriptor rather than due to hitting the timeout?

Okay, I have oprofile installed and mostly figured out.

opreport without console input:

[root@1950-2 ~]# opreport -l /usr/lib64/xen/bin/qemu-dm|head
CPU: P4 / Xeon with 2 hyper-threads, speed 3191.87 MHz (estimated)
Counted GLOBAL_POWER_EVENTS events (time during which processor is not stopped) with a unit mask of 0x01 (mandatory) count 100000
samples  %        symbol name
13193    32.7483  main_loop_wait
5787     14.3648  cpu_handle_ioreq
3873      9.6138  DMA_run
1904      4.7262  qemu_get_clock
1683      4.1776  qemu_run_timers
1333      3.3088  qemu_del_timer
985       2.4450  __handle_ioreq



with console input:
[root@1950-2 ~]# opreport -l /usr/lib64/xen/bin/qemu-dm|head
CPU: P4 / Xeon with 2 hyper-threads, speed 3191.87 MHz (estimated)
Counted GLOBAL_POWER_EVENTS events (time during which processor is not stopped) with a unit mask of 0x01 (mandatory) count 100000
samples  %        symbol name
11299    29.0904  main_loop_wait
4944     12.7288  cpu_handle_ioreq
3104      7.9916  DMA_run
1729      4.4515  cpu_physical_memory_rw
1481      3.8130  qemu_get_clock
1431      3.6843  iomem_index
1392      3.5838  qemu_run_timers


(Those two look about the same, but they accumulated at very different rates: the slow case took a good minute to reach 3000 samples, whereas the 'with console input' test took maybe 10 seconds.)

Hm. I need to do more reading on oprofile in the morning (I don't have remote access to an HVM-capable system from home at the moment).


Does the slowness happen if you are using the emulated network device at the same time (i.e., does the event to kick back to normal behaviour have to be a console event)?

Pinging it doesn't help (the thing that triggers this is an rsync over the network to the drive). I will re-install with an HVM guest that is a proper Linux box (rather than a systemimager boot floppy) and attempt to re-create the problem with dd or rsync in the morning.

I guess something may have crept in when we moved to using qemu's
asynchronous block i/o code...

It is using rsync to copy several gigabytes of files, so it is certainly exercising the block I/O.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

