
Re: [Xen-devel] DomU vs Dom0 performance.



Please find my response inline.

Thank you,
Sushrut.

On 1 October 2013 10:05, Felipe Franciosi <felipe.franciosi@xxxxxxxxxx> wrote:

1) Can you paste your entire config file here?

This is just for clarification on the HVM bit.

Your "disk" config suggests you are using the PV protocol for storage (blkback); a quick way to verify that from inside the guest is sketched after the config below.

kernel = "hvmloader"
builder='hvm'
memory = 4096
name = "ArchHVM"
vcpus=8
disk = [ 'phy:/dev/sda5,hda,w', 'file:/root/dev/iso/archlinux.iso,hdc:cdrom,r' ]
device_model = 'qemu-dm'
boot="c"
sdl=0
xen_platform_pci=1
opengl=0
vnc=0
vncpasswd=''
nographic=1
stdvga=0
serial='pty'
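
To verify that from inside the guest (an illustrative check, assuming a Linux domU with PV drivers available; not taken from the original thread), you can look for the PV block driver and the xvd* device nodes it creates:

<snip>
# If blkfront is bound, the PV driver module is loaded...
lsmod | grep xen_blkfront
# ...and PV disks show up as xvd* block devices.
ls /sys/class/block/ | grep '^xvd'
</snip>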
 

2) Also, can you run "uname -a" in both dom0 and domU and paste it here as well?

Based on the syscall latencies you presented, it sounds like one domain may be 32-bit and the other 64-bit.

 

The kernel information on dom0 is:
Linux localhost 3.5.0-IDD #5 SMP PREEMPT Fri Sep 6 23:31:56 UTC 2013 x86_64 GNU/Linux

and on domU is:
Linux domu 3.5.0-IDD-12913 #2 SMP PREEMPT Sun Dec 9 17:54:30 EST 2012 x86_64 GNU/Linux

Both kernels are x86_64, so a 32-/64-bit mismatch is ruled out.

3) You are doing this:

 

> <snip>
> for i in `ls test_file.*`
> do
>    sudo dd if=./$i of=/dev/zero
> done
> </snip>

My bad. I have changed it to /dev/null. 

I don't know what you intended with this, but /dev/zero is meant to be read from (it supplies zeros); the conventional sink for discarding output is /dev/null.
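
For the record, a corrected version of that loop (a minimal sketch; the explicit bs=1M block size is my addition to speed up the sequential read):

<snip>
for i in test_file.*
do
    # Read each test file once to warm the page cache,
    # discarding the data into /dev/null.
    sudo dd if=./$i of=/dev/null bs=1M
done
</snip>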

If your “img” is 5G and your guest has 4G of RAM, you will not consistently buffer the entire image.
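
One way to see how much of the image actually stayed resident (an illustrative check, assuming a Linux guest; "file.img" stands in for your image name) is to compare the "cached" figure reported by free before and after the read pass:

<snip>
free -m                              # note the cached figure before
dd if=./file.img of=/dev/null bs=1M  # read the image once
free -m                              # compare the cached figure after
</snip>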

 

Even though I am using a 5G img, the read operations executed total only 1G. Also, lmbench doesn't involve any reads or writes to this ".img", yet the results I am getting are still better on domU when measured with the lmbench micro-benchmarks.

You are then doing buffered IO (note that some of your requests are completing in 10us). That can only happen if you are reading from memory and not from disk.
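
To take the page cache out of the picture before each run, one option (as root, in both domains) is:

<snip>
sync                                # flush dirty pages to disk first
echo 3 > /proc/sys/vm/drop_caches   # then drop the page cache, dentries and inodes
</snip>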

Even though a single request completes in 10us, the total time required to complete all 5,000,000 requests is 17 and 13 seconds on dom0 and domU respectively.

(I forgot to mention that I have an SSD installed in this machine.)

If you want to consistently compare the performance between two domains, you should always bypass the VM’s cache with O_DIRECT.
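
With sysbench that would look something like this (a sketch based on your run command; --file-extra-flags=direct makes sysbench open the test files with O_DIRECT):

<snip>
sysbench --num-threads=8 --test=fileio --file-total-size=1G \
         --max-requests=5000000 --file-test-mode=rndrd \
         --file-extra-flags=direct run
</snip>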

But looking at the results of the lat_syscall and bw_mem micro-benchmarks, syscalls are executed faster in domU and memory bandwidth is higher in domU.
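
One way to make those lmbench numbers more directly comparable between the two domains (a sketch, assuming the lmbench binaries sit in the current directory) is to pin each run to a single CPU and repeat it, so scheduling noise doesn't dominate individual samples:

<snip>
for run in 1 2 3
do
    # Pin to CPU 0 so migrations between (v)CPUs don't skew the latencies.
    taskset -c 0 ./lat_syscall write
    taskset -c 0 ./lat_syscall read
    taskset -c 0 ./bw_mem 1000m rd
done
</snip>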

 

Cheers,
Felipe

 

From: xen-devel-bounces@xxxxxxxxxxxxx [mailto:xen-devel-bounces@xxxxxxxxxxxxx] On Behalf Of sushrut shirole
Sent: 30 September 2013 16:47
To: Konrad Rzeszutek Wilk
Cc: xen-devel@xxxxxxxxxxxxx
Subject: Re: [Xen-devel] DomU vs Dom0 performance.

 

It's an HVM guest.

 

On 30 September 2013 14:36, Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx> wrote:

On Sun, Sep 29, 2013 at 07:22:14PM -0400, sushrut shirole wrote:
> Hi,
>
> I have been doing some disk I/O benchmarking of dom0 and domU (HVM). I ran
> into an issue where domU performed better than dom0, so I ran a few
> experiments to check whether it is just disk I/O performance.
>
> I have archlinux (kernel 3.5.0) + Xen 4.2.2 installed on an Intel Core
> i7 Q720 machine. I have also installed archlinux (kernel 3.5.0) in domU
> running on this machine. The domU runs with 8 vcpus. I have allotted
> both dom0 and domU 4096M RAM.

What kind of guest is it ? PV or HVM?


>
> I performed the following experiments to compare the performance of domU
> vs dom0.
>
> experiment 1]
>
> 1. Created a file.img of 5G
> 2. Mounted the file with ext2 filesystem.
> 3. Ran sysbench with following command.
>
> sysbench --num-threads=8 --test=fileio --file-total-size=1G
> --max-requests=1000000 prepare
>
> 4. Read files into memory
>
> script to read files
>
> <snip>
> for i in `ls test_file.*`
> do
>    sudo dd if=./$i of=/dev/zero
> done
> </snip>
>
> 5. Ran sysbench.
>
> sysbench --num-threads=8 --test=fileio --file-total-size=1G
> --max-requests=5000000 --file-test-mode=rndrd run
>
> the output i got on dom0 is
>
> <output>
> Number of threads: 8
>
> Extra file open flags: 0
> 128 files, 8Mb each
> 1Gb total file size
> Block size 16Kb
> Number of random requests for random IO: 5000000
> Read/Write ratio for combined random IO test: 1.50
> Periodic FSYNC enabled, calling fsync() each 100 requests.
> Calling fsync() at the end of test, Enabled.
> Using synchronous I/O mode
> Doing random read test
>
> Operations performed:  5130322 Read, 0 Write, 0 Other = 5130322 Total
> Read 78.283Gb  Written 0b  Total transferred 78.283Gb  (4.3971Gb/sec)

> 288165.68 Requests/sec executed

>
> Test execution summary:
>     total time:                          17.8034s
>     total number of events:              5130322
>     total time taken by event execution: 125.3102
>     per-request statistics:
>          min:                                  0.01ms
>          avg:                                  0.02ms
>          max:                                 55.55ms
>          approx.  95 percentile:               0.02ms
>
> Threads fairness:
>     events (avg/stddev):           641290.2500/10057.89
>     execution time (avg/stddev):   15.6638/0.02
> </output>
>
> 6. Performed same experiment on domU and result I got is
>
> <output>
> Number of threads: 8
>
> Extra file open flags: 0
> 128 files, 8Mb each
> 1Gb total file size
> Block size 16Kb
> Number of random requests for random IO: 5000000
> Read/Write ratio for combined random IO test: 1.50
> Periodic FSYNC enabled, calling fsync() each 100 requests.
> Calling fsync() at the end of test, Enabled.
> Using synchronous I/O mode
> Doing random read test
>
> Operations performed:  5221490 Read, 0 Write, 0 Other = 5221490 Total
> Read 79.674Gb  Written 0b  Total transferred 79.674Gb  (5.9889Gb/sec)

> 392489.34 Requests/sec executed

>
> Test execution summary:
>     total time:                          13.3035s
>     total number of events:              5221490
>     total time taken by event execution: 98.7121
>     per-request statistics:
>          min:                                  0.01ms
>          avg:                                  0.02ms
>          max:                                 49.75ms
>          approx.  95 percentile:               0.02ms
>
> Threads fairness:
>     events (avg/stddev):           652686.2500/1494.93
>     execution time (avg/stddev):   12.3390/0.02
>
> </output>
>
> I was expecting dom0 to perform better than domU, so to debug further I
> ran the lmbench micro-benchmarks.
>
> Experiment 2] bw_mem benchmark
>
> 1. ./bw_mem 1000m wr
>
> dom0 output:
>
> 1048.58 3640.60
>
> domU output:
>
> 1048.58 4719.32
>
> 2. ./bw_mem 1000m rd
>
> dom0 output:
> 1048.58 5780.56
>
> domU output:
>
> 1048.58 6258.32
>
>
> Experiment 3] lat_syscall benchmark
>
> 1.  ./lat_syscall write
>
> dom0 output:
> Simple write: 1.9659 microseconds
>
> domU output :
> Simple write: 0.4256 microseconds
>
> 2. ./lat_syscall read
>
> dom0 output:
> Simple read: 1.9399 microseconds
>
> domU output :
> Simple read: 0.3764 microseconds
>
> 3. ./lat_syscall stat
>
> dom0 output:
> Simple stat:3.9667 microseconds
>
> domU output :
> Simple stat: 1.2711 microseconds
>
> I am not able to understand why domU has performed better than dom0, when
> the obvious guess is that dom0 should perform better than domU. I would
> really appreciate any help if anyone knows the reason behind this issue.
>
> Thank you,
> Sushrut.


 


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

