
[Xen-devel] Benchmarking Xen (results and questions)


  • To: <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: <David_Wolinsky@xxxxxxxx>
  • Date: Wed, 3 Aug 2005 18:21:15 -0500
  • Delivery-date: Wed, 03 Aug 2005 23:19:40 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AcWYggtHtgtFhzqiRIKy2t/epLpSpA==
  • Thread-topic: Benchmarking Xen (results and questions)

Hi all,

Here are some benchmarks that I've done using Xen.

However, before I get started, let me explain some of the configuration details:

    Xen Version:
    Benchmarks:          SPECjbb, WebBench
    Linux Distribution:  Debian 3.1
    HT:                  disabled
    Linux Kernel:        2.6.12.2
    Host Patch:          CK3s

Here are the initial benchmarks:

            SPECjbb     WebBench
            1 Thread    1 Client    2 Clients   4 Clients   8 Clients
            BOPS        TPS         TPS         TPS         TPS
    Host    32403.5     213.45      416.86      814.62      1523.78
    1 VM    32057       205.4       380.91      569.24      733.8
    2 VM    24909.25    NA          399.29      695.1       896.04
    4 VM    17815.75    NA          NA          742.78      950.63
    8 VM    10216.25    NA          NA          NA          1002.81

A few more notes: BOPS = business operations per second, TPS = transactions per second.
SPECjbb tests CPU and memory.
WebBench (the way we configured it) tests network I/O and disk I/O.

Reported values = per-VM average * VM count (see the short sketch after the list below).

Domain configurations:
    1 VM  - 1660 MB - SPECjbb 1500 MB
    2 VMs - 1280 MB - SPECjbb 1024 MB
    4 VMs -  640 MB - SPECjbb  512 MB
    8 VMs -  320 MB - SPECjbb  256 MB
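To make that reporting convention concrete, here is a minimal sketch in plain Python (the 8-VM SPECjbb figure is taken from the table above; the helper name is just for illustration):

    # The tables report aggregate throughput: per-VM average * VM count.
    def aggregate(per_vm_avg, vm_count):
        return per_vm_avg * vm_count

    # Working backwards from the 8-VM SPECjbb row (10216.25 BOPS aggregate):
    per_vm = 10216.25 / 8                     # ~1277 BOPS per VM on average
    print(f"per-VM average:     {per_vm:.2f} BOPS")
    print(f"aggregate (8 VMs):  {aggregate(per_vm, 8):.2f} BOPS")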

Seeing how sharply the SPECjbb numbers declined, I did some scheduling tests and found the following:

Test 1: Examine Xen's scheduling to determine whether context switching is causing the overhead.

                        Period      Slice       BOPS
    Modified    8 VM    1 ms        125 us      6858
                8 VM    10 ms       1.25 ms     14287
                8 VM    100 ms      12.5 ms     18912
                8 VM    1 sec       0.125 sec   20695
                8 VM    2 sec       0.25 sec    21072
                8 VM    10 sec      1.25 sec    21797
                8 VM    100 sec     12.5 sec    11402

I later learned that there is a period limit of 4 seconds, which invalidates the 10-second and 100-second rows. Even so, these results suggest that Xen needs some work on load balancing and scheduling.
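For context, here is a small sketch of the arithmetic behind these settings (plain Python; the equal 1/8 CPU share and the periods from the table are the only inputs, everything else is illustrative). Each of the 8 VMs gets slice = period / 8, so shrinking the period raises the number of context switches per second the scheduler has to perform:

    # Sketch: relate scheduler period/slice to context-switch rate for 8 VMs.
    # slice = period / vm_count gives each VM an equal 1/8 CPU share;
    # switches per second ~= vm_count / period (one slice per VM per period).

    VM_COUNT = 8

    def slice_for(period_s, vm_count=VM_COUNT):
        return period_s / vm_count

    def switches_per_second(period_s, vm_count=VM_COUNT):
        return vm_count / period_s

    for period in (0.001, 0.010, 0.100, 1.0, 2.0):    # periods from the table
        print(f"period {period:>6} s -> slice {slice_for(period) * 1e3:.3f} ms, "
              f"~{switches_per_second(period):>6.0f} switches/s")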

I also ran a memory test to determine whether memory bandwidth could be the issue. I made a custom STREAM run lasting a two-minute period and got these numbers:

                        Copy        Scale       Add         Triad
    Host                3266.4      3215.47     3012.28     3021.79
    Modified    1 VM    3262.34     3220.34     3016.13     3025.28

So we can see that memory bandwidth is not the issue.
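For anyone unfamiliar with what the run measures, here is a minimal numpy sketch of the four STREAM-style kernels (Copy, Scale, Add, Triad). The array size, scalar, and bandwidth bookkeeping are illustrative assumptions, not the exact custom harness used above:

    # Minimal STREAM-style sketch (numpy): the four kernels whose bandwidth
    # is reported as Copy / Scale / Add / Triad in the table above.
    import time
    import numpy as np

    N = 10_000_000                 # illustrative array size (~80 MB per array)
    s = 3.0                        # illustrative scalar
    a = np.random.rand(N)
    b = np.random.rand(N)
    c = np.empty_like(a)

    def bandwidth_mb_s(nbytes, seconds):
        return nbytes / seconds / 1e6

    kernels = {
        "Copy":  (lambda: np.copyto(c, a),           2),  # c = a
        "Scale": (lambda: np.multiply(c, s, out=b),  2),  # b = s * c
        "Add":   (lambda: np.add(a, b, out=c),       3),  # c = a + b
        "Triad": (lambda: np.add(b, s * c, out=a),   3),  # a = b + s * c
    }

    for name, (kernel, arrays_touched) in kernels.items():
        t0 = time.perf_counter()
        kernel()
        dt = time.perf_counter() - t0
        print(f"{name:5s} {bandwidth_mb_s(arrays_touched * 8 * N, dt):8.1f} MB/s")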

Now on to WebBench. Comparing the WebBench results with the SPECjbb results, we see something interesting: the aggregate numbers increase as we increase the virtual machine count. I would really like some insight into why that is. My understanding is this: when using the shared-memory network drivers, there is a local buffer; when that buffer fills up, the remainder goes into a global buffer, and when that fills up it goes into a disk buffer? (These are all assumptions; please correct me.) If that is the case, is there an easy way to increase the local buffer to try to get better numbers? I am also looking into tests that use many small transactions versus one large transaction. I ran all of these against both a physical disk and an image-backed disk. Any suggestions are welcome.

(Note: I was running this on a 1-gigabit switch with only WebBench traffic on it.)
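As a quick way to quantify that scaling, here is a small sketch in plain Python using the TPS figures from the WebBench table above; the efficiency metric itself is just my own illustration, comparing each aggregate multi-VM result against the bare host at the same client count:

    # Sketch: WebBench scaling relative to the bare host at the same client count.
    # TPS figures are taken from the table above.

    host_tps = {1: 213.45, 2: 416.86, 4: 814.62, 8: 1523.78}
    vm_tps = {   # vm_count -> {clients: aggregate TPS}
        1: {1: 205.4, 2: 380.91, 4: 569.24, 8: 733.8},
        2: {2: 399.29, 4: 695.1, 8: 896.04},
        4: {4: 742.78, 8: 950.63},
        8: {8: 1002.81},
    }

    for vms, results in vm_tps.items():
        for clients, tps in results.items():
            eff = tps / host_tps[clients]
            print(f"{vms} VM(s), {clients} client(s): "
                  f"{tps:8.2f} TPS aggregate, {eff:5.1%} of host")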

If there are any questions, I would be glad to respond.

Thanks,
David

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

 

