
RE: [Xen-devel] question about disk performance in domU




>> -----Original Message-----
>> From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx 
>> [mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of 
>> Tim Freeman
>> Sent: Monday, November 21, 2005 12:45 PM
>> To: xen-devel@xxxxxxxxxxxxxxxxxxx
>> Cc: keahey@xxxxxxxxxxx; xuehai zhang
>> Subject: Re: [Xen-devel] question about disk performance in domU
>> 
>> 
>> So the "Timing buffered disk reads" numbers are much higher in 
>> the domU.  I see the DMA zone is larger in the domU, but since 
>> its disk is mapped to a loopback file, I'd guess physical disk 
>> performance should only be affected by the DMA zone of the dom0 
>> on the node running the domU, if that were all that was going 
>> on.  But dom0 performance seems comparable to native Linux. 
>> 
>> Is this huge "timing buffered disk read" difference accurate? 
>> Does the domU benefit from some other cache of the loopback 
>> file in dom0?  
 
  Yes. It seems to me that this is the effect of the file buffer
  cache in dom0. hdparm flushes the buffer cache in the domU to
  make sure no data is cached there when it measures device access
  times (reported as "buffered disk reads"). However, when using
  VBDs mapped to loopback files, the data is also cached in dom0's
  file buffer cache, so it is not coming directly from the device
  but from dom0's cache. Note how the amount of data read in 3
  seconds increases at each step: the data read in previous steps
  is served from dom0's file cache.
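 
  (For reference: hdparm -t flushes the buffer cache with the
  BLKFLSBUF ioctl -- the same thing "blockdev --flushbufs" does --
  before timing its reads. Run inside a domU, that only clears
  the domU's own cache, not the copy dom0 holds for the backing
  file. A rough way to see that the flush is local only:

    domU$ blockdev --flushbufs /dev/sda1   # clears domU's cache only
    domU$ dd if=/dev/sda1 of=/dev/null bs=1M count=64

  The dd still comes back at cached speed if dom0 already has
  that part of the backing file in memory.)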
 
  Regards

  Renato
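 
  Two quick checks along these lines (the backing-file path
  /vm/cctest1-root.img below is just a placeholder -- substitute
  whatever file the VBD is actually mapped to). First, in dom0,
  time two successive reads of the backing file; if the second
  pass is much faster, dom0's page cache is serving the data:

    dom0$ time dd if=/vm/cctest1-root.img of=/dev/null bs=1M count=256
    dom0$ time dd if=/vm/cctest1-root.img of=/dev/null bs=1M count=256

  Second, if the goal is to benchmark the physical disk from
  inside the domU, exporting a raw partition instead of a loopback
  file takes dom0's file cache out of the read path. A sketch of
  the two styles of disk line in a domU config (device names are
  examples only):

    disk = [ 'phy:sda7,sda1,w' ]    # raw partition; not cached by dom0
    # disk = [ 'file:/vm/cctest1-root.img,sda1,w' ]  # file-backed; cached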



>> Thanks for any insights,
>> Tim 
>> 
>> On Mon, 21 Nov 2005 09:41:08 -0600
>> xuehai zhang <hai@xxxxxxxxxxxxxxx> wrote:
>> 
>> > Hi all,
>> > 
>> > When I ran experiments to compare an application's execution time 
>> > in a domU (named cctest1) and on a native Linux machine (named 
>> > ccn10), I noticed the application executes faster in the domU. The 
>> > host of the domU (named ccn9) and ccn10 are two nodes of a cluster 
>> > with identical hardware configurations. The domU (cctest1) is 
>> > created by exporting loopback files from dom0 on ccn9 as VBD 
>> > backends. The application execution logs suggested there might be 
>> > some disk I/O difference between cctest1 and ccn10, so I did some 
>> > disk performance profiling with "hdparm" on cctest1 (domU), ccn10 
>> > (native Linux), ccn9 (dom0), and ccn9 (native Linux). I also 
>> > checked the "DMA" config information in the output of dmesg. I 
>> > tried to run "hdparm -i" and "hdparm -I" but they didn't work; it 
>> > seems they don't work with SCSI disks. The following are the 
>> > results. Thanks in advance for your help.
>> > 
>> > Best,
>> > Xuehai
>> > 
>> > 1. cctest1 (domU)
>> > 
>> > **********************************************************************
>> > 
>> > cctest1$ df -lh
>> > Filesystem            Size  Used Avail Use% Mounted on
>> > /dev/sda1             1.5G  1.1G  306M  78% /
>> > tmpfs                  62M  4.0K   62M   1% /dev/shm
>> > /dev/sda6             4.2G  3.6G  453M  89% /tmp
>> > /dev/sda5             938M  205M  685M  23% /var
>> > 
>> > cctest1$ dmesg | grep DMA
>> >    DMA zone: 101376 pages, LIFO batch:16
>> > 
>> > cctest1$ for i in `seq 1 10`; do hdparm -tT /dev/sda1; done
>> > 
>> >   Timing cached reads:   512 MB in  2.00 seconds = 256.00 MB/sec
>> >   Timing buffered disk reads:   44 MB in  3.00 seconds =  14.67 MB/sec
>> > 
>> >   Timing cached reads:   528 MB in  2.01 seconds = 262.69 MB/sec
>> >   Timing buffered disk reads:   84 MB in  3.08 seconds =  27.27 MB/sec
>> > 
>> >   Timing cached reads:   520 MB in  2.00 seconds = 260.00 MB/sec
>> >   Timing buffered disk reads:  120 MB in  3.06 seconds =  39.22 MB/sec
>> > 
>> >   Timing cached reads:   520 MB in  2.00 seconds = 260.00 MB/sec
>> >   Timing buffered disk reads:  150 MB in  3.06 seconds =  49.02 MB/sec
>> > 
>> >   Timing cached reads:   536 MB in  2.00 seconds = 268.00 MB/sec
>> >   Timing buffered disk reads:  178 MB in  3.17 seconds =  56.15 MB/sec
>> > 
>> >   Timing cached reads:   536 MB in  2.00 seconds = 268.00 MB/sec
>> >   Timing buffered disk reads:  204 MB in  3.08 seconds =  66.23 MB/sec
>> > 
>> >   Timing cached reads:   532 MB in  2.00 seconds = 266.00 MB/sec
>> >   Timing buffered disk reads:  228 MB in  3.13 seconds =  72.84 MB/sec
>> > 
>> >   Timing cached reads:   540 MB in  2.01 seconds = 268.66 MB/sec
>> >   Timing buffered disk reads:  248 MB in  3.04 seconds =  81.58 MB/sec
>> > 
>> >   Timing cached reads:   540 MB in  2.00 seconds = 270.00 MB/sec
>> >   Timing buffered disk reads:  266 MB in  3.06 seconds =  86.93 MB/sec
>> > 
>> >   Timing cached reads:   532 MB in  2.00 seconds = 266.00 MB/sec
>> >   Timing buffered disk reads:  282 MB in  3.06 seconds =  92.16 MB/sec
>> > 
>> > **********************************************************************
>> > 
>> > 2. ccn10 (native Linux)
>> > 
>> > **********************************************************************
>> > 
>> > ccn10$ df -lh
>> > Filesystem            Size  Used Avail Use% Mounted on
>> > /dev/sda1             1.5G  1.3G  149M  90% /
>> > tmpfs                 252M     0  252M   0% /dev/shm
>> > /dev/sda6             4.2G  3.6G  358M  92% /tmp
>> > /dev/sda5             938M  706M  184M  80% /var
>> > 
>> > ccn10$ dmesg | grep DMA
>> >    DMA zone: 4096 pages, LIFO batch:1
>> > 
>> > ccn10$ for i in `seq 1 10`; do hdparm -tT /dev/sda1; done
>> > 
>> >   Timing cached reads:   516 MB in  2.00 seconds = 257.78 MB/sec
>> >   Timing buffered disk reads:   62 MB in  3.03 seconds =  20.47 MB/sec
>> > 
>> >   Timing cached reads:   524 MB in  2.01 seconds = 261.00 MB/sec
>> >   Timing buffered disk reads:   62 MB in  3.01 seconds =  20.61 MB/sec
>> > 
>> >   Timing cached reads:   516 MB in  2.00 seconds = 257.65 MB/sec
>> >   Timing buffered disk reads:   62 MB in  3.01 seconds =  20.61 MB/sec
>> > 
>> >   Timing cached reads:   524 MB in  2.00 seconds = 262.04 MB/sec
>> >   Timing buffered disk reads:   62 MB in  3.01 seconds =  20.61 MB/sec
>> > 
>> >   Timing cached reads:   516 MB in  2.00 seconds = 257.78 MB/sec
>> >   Timing buffered disk reads:   62 MB in  3.02 seconds =  20.51 MB/sec
>> > 
>> >   Timing cached reads:   524 MB in  2.00 seconds = 261.78 MB/sec
>> >   Timing buffered disk reads:   62 MB in  3.02 seconds =  20.52 MB/sec
>> > 
>> >   Timing cached reads:   516 MB in  2.00 seconds = 257.78 MB/sec
>> >   Timing buffered disk reads:   62 MB in  3.02 seconds =  20.51 MB/sec
>> > 
>> >   Timing cached reads:   524 MB in  2.00 seconds = 261.78 MB/sec
>> >   Timing buffered disk reads:   62 MB in  3.02 seconds =  20.50 MB/sec
>> > 
>> >   Timing cached reads:   516 MB in  2.00 seconds = 257.40 MB/sec
>> >   Timing buffered disk reads:   64 MB in  3.09 seconds =  20.73 MB/sec
>> > 
>> >   Timing cached reads:   524 MB in  2.01 seconds = 260.87 MB/sec
>> >   Timing buffered disk reads:   62 MB in  3.01 seconds =  20.61 MB/sec
>> > 
>> > **********************************************************************
>> > 
>> > 3. ccn9 (dom0)
>> > 
>> > **********************************************************************
>> > 
>> > ccn9$ df -lh
>> > Filesystem            Size  Used Avail Use% Mounted on
>> > /dev/sda1             1.5G  1.1G  306M  78% /
>> > tmpfs                  62M  4.0K   62M   1% /dev/shm
>> > /dev/sda6             4.2G  3.6G  453M  89% /tmp
>> > /dev/sda5             938M  205M  685M  23% /var
>> > 
>> > ccn9$ dmesg | grep DMA
>> >    DMA zone: 32768 pages, LIFO batch:8
>> > 
>> > ccn9$ for i in `seq 1 10`; do hdparm -tT /dev/sda1; done
>> > 
>> >   Timing cached reads:   504 MB in  2.00 seconds = 252.00 MB/sec
>> >   Timing buffered disk reads:   60 MB in  3.14 seconds =  19.11 MB/sec
>> > 
>> >   Timing cached reads:   516 MB in  2.00 seconds = 258.00 MB/sec
>> >   Timing buffered disk reads:   62 MB in  3.15 seconds =  19.68 MB/sec
>> > 
>> >   Timing cached reads:   512 MB in  2.00 seconds = 256.00 MB/sec
>> >   Timing buffered disk reads:   60 MB in  3.08 seconds =  19.48 MB/sec
>> > 
>> >   Timing cached reads:   516 MB in  2.00 seconds = 258.00 MB/sec
>> >   Timing buffered disk reads:   58 MB in  3.02 seconds =  19.21 MB/sec
>> > 
>> >   Timing cached reads:   516 MB in  2.01 seconds = 256.72 MB/sec
>> >   Timing buffered disk reads:   60 MB in  3.12 seconds =  19.23 MB/sec
>> > 
>> >   Timing cached reads:   520 MB in  2.00 seconds = 260.00 MB/sec
>> >   Timing buffered disk reads:   60 MB in  3.13 seconds =  19.17 MB/sec
>> > 
>> >   Timing cached reads:   520 MB in  2.01 seconds = 258.71 MB/sec
>> >   Timing buffered disk reads:   60 MB in  3.13 seconds =  19.17 MB/sec
>> > 
>> >   Timing cached reads:   520 MB in  2.01 seconds = 258.71 MB/sec
>> >   Timing buffered disk reads:   60 MB in  3.06 seconds =  19.61 MB/sec
>> > 
>> >   Timing cached reads:   516 MB in  2.01 seconds = 256.72 MB/sec
>> >   Timing buffered disk reads:   60 MB in  3.14 seconds =  19.11 MB/sec
>> > 
>> >   Timing cached reads:   516 MB in  2.00 seconds = 258.00 MB/sec
>> >   Timing buffered disk reads:   60 MB in  3.15 seconds =  19.05 MB/sec
>> > 
>> > **********************************************************************
>> > 
>> > 4. ccn9 (native Linux)
>> > 
>> > **********************************************************************
>> > 
>> > ccn9$ df -lh
>> > Filesystem            Size  Used Avail Use% Mounted on
>> > /dev/sda1             1.5G  1.1G  306M  78% /
>> > tmpfs                  62M  4.0K   62M   1% /dev/shm
>> > /dev/sda6             4.2G  3.6G  453M  89% /tmp
>> > /dev/sda5             938M  205M  685M  23% /var
>> > 
>> > ccn9$ dmesg | grep DMA
>> >    DMA zone: 4096 pages, LIFO batch:1
>> > 
>> > ccn9$ for i in `seq 1 10`; do hdparm -tT /dev/sda1; done
>> > /dev/sda1:
>> >   Timing cached reads:   492 MB in  2.01 seconds = 244.57 MB/sec
>> >   Timing buffered disk reads:   62 MB in  3.10 seconds =  20.01 MB/sec
>> > 
>> >   Timing cached reads:   484 MB in  2.01 seconds = 241.07 MB/sec
>> >   Timing buffered disk reads:   48 MB in  3.01 seconds =  15.95 MB/sec
>> > 
>> >   Timing cached reads:   484 MB in  2.00 seconds = 241.67 MB/sec
>> >   Timing buffered disk reads:   62 MB in  3.03 seconds =  20.45 MB/sec
>> > 
>> >   Timing cached reads:   484 MB in  2.01 seconds = 241.31 MB/sec
>> >   Timing buffered disk reads:   62 MB in  3.01 seconds =  20.57 MB/sec
>> > 
>> >   Timing cached reads:   480 MB in  2.01 seconds = 239.08 MB/sec
>> >   Timing buffered disk reads:   62 MB in  3.03 seconds =  20.49 MB/sec
>> > 
>> >   Timing cached reads:   488 MB in  2.01 seconds = 243.31 MB/sec
>> >   Timing buffered disk reads:   62 MB in  3.05 seconds =  20.31 MB/sec
>> > 
>> >   Timing cached reads:   484 MB in  2.01 seconds = 241.31 MB/sec
>> >   Timing buffered disk reads:   62 MB in  3.01 seconds =  20.61 MB/sec
>> > 
>> >   Timing cached reads:   484 MB in  2.00 seconds = 241.67 MB/sec
>> >   Timing buffered disk reads:   62 MB in  3.01 seconds =  20.59 MB/sec
>> > 
>> >   Timing cached reads:   488 MB in  2.01 seconds = 242.34 MB/sec
>> >   Timing buffered disk reads:   62 MB in  3.01 seconds =  20.59 MB/sec
>> > 
>> >   Timing cached reads:   484 MB in  2.01 seconds = 240.35 MB/sec
>> >   Timing buffered disk reads:   62 MB in  3.09 seconds =  20.09 MB/sec
>> > **********************************************************************
>> > 
>> > _______________________________________________
>> > Xen-devel mailing list
>> > Xen-devel@xxxxxxxxxxxxxxxxxxx http://lists.xensource.com/xen-devel
>> > 
>> 
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@xxxxxxxxxxxxxxxxxxx http://lists.xensource.com/xen-devel
>> 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

