
[Xen-devel] Testing status of fully virtualized guests (Intel VT) on 32bit XEN unstable



* NEW: My 32bit XEN builds are still not PAE enabled.  Chasing down
  why this is.  For now I have stopped running the 32b SMP guests.

Test Configuration:
Dell PowerEdge 430, Dual Core, 2GB, 3 SATA (Intel VT)
32bit XEN PAE Hypervisor on a RHEL4U2 32bit root  (/dev/sda)
      dom0_mem=256M (required to boot domUs)
32bit fully virtualized (HVM) guest RHEL4U2 256MB (/dev/sdb)
      pae=0, acpi=1, apic=1
      kernargs clock=pit
32bit fully virtualized (HVM) guest RHEL3U6 256MB (/dev/sdc)
      pae=0, acpi=1, apic=1
      kernargs clock=pit
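
For reference, these guest settings live in an xm domain config file.
A minimal sketch of what such an HVM config looks like (names and
paths are illustrative, not the exact files used in these runs; the
dom0_mem=256M above, by contrast, goes on the xen.gz line in grub):

    # Illustrative HVM guest config, e.g. /etc/xen/rhel4u2-32b
    kernel       = "/usr/lib/xen/boot/hvmloader"
    builder      = "hvm"
    device_model = "/usr/lib/xen/bin/qemu-dm"
    name         = "rhel4u2-32b"
    memory       = 256
    vcpus        = 1                # 2 for the SMP columns below
    pae          = 0
    acpi         = 1
    apic         = 1
    disk         = [ "phy:/dev/sdb,ioemu:hda,w" ]
    # "kernargs clock=pit" is passed on the guest's own grub kernel
    # line inside the disk image, not in this file.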

Boot Tests:
Boot a fully virtualized (HVM) guest to the login prompt
Results are marked Pass|Fail where (n) points to a failure description

Regression Tests:
851 tests (850 ltp tests and one 30-minute user load test)
Tests are marked #Pass/#Fail where (n) points to a failure description


XEN 32bit PAE 2 CPU Hypervisor (booted smp):
 ----------------------------------------------------------------------
| XEN      | Guest Kernel (SMP kernels booted with 2 CPU)              |
| Changeset|-----------------------------------------------------------|
|          | RHEL4 UP     | RHEL4 SMP    | RHEL3 UP     | RHEL3 SMP    |
|          |--------------|--------------|--------------|--------------|
|          | Boot | Test  | Boot | Test  | Boot | Test  | Boot | Test  |
|----------|------|-------|------|-------|------|-------|------|-------|
| 9960     | Pass | 850/1 |      |       | Pass |Running|      |       |
|          |      | (2)   |      |       |      | (2,4) |      |       |
|----------|------|-------|------|-------|------|-------|------|-------|
| 9925     | Pass | 850/1 | Fail |       | Pass | 850/1 | Fail |       |
|          |      | (2)   | (1)  |       |      | (2,4) | (1)  |       |
|----------|------|-------|------|-------|------|-------|------|-------|
| 9920     | Pass | 850/1 | Fail |       | Pass | 850/1 | Fail |       |
|          |      | (2)   | (1)  |       |      | (2,4) | (1)  |       |
|----------|------|-------|------|-------|------|-------|------|-------|
| 9913     | Fail |       | Fail |       | Fail |       | Fail |       |
|          | (3)  |       | (3)  |       | (3)  |       | (3)  |       |
|----------|------|-------|------|-------|------|-------|------|-------|
| 9903     | Fail |       | Fail |       | Fail |       | Fail |       |
|          | (3)  |       | (3)  |       | (3)  |       | (3)  |       |
 ----------------------------------------------------------------------

Failures:
1. 32bit SMP guests hang on boot at
   "Uncompressing Linux... Ok, booting the kernel."
2. 32bit UP guests fail ltp gettimeofday02 with
   "Time is going backwards"
   (a sketch of the check this test performs follows this list).
3. [Fixed in 9920] Build broken:
   cc1: warnings being treated as errors
   mm.c: In function subarch_init_memory:
   mm.c:163: warning: format %ld expects type long int,
   but argument 2 has type unsigned int
   mm.c:163: warning: format %ld expects type long int,
   but argument 3 has type unsigned int
   (a sketch of this kind of format fix also follows this list).
4. The RHEL3 UP tests run OK; however, they take 12 hours
   to complete, whereas the RHEL4 UP runs finish in ~4 hours.
   There seems to be some performance issue running a
   2.4 kernel fully virtualized guest. One test in
   particular, ltp's aio-stress025, takes nearly 3 hours on
   RHEL3U6-32b while it takes under 4 minutes on RHEL4U2-32b
   (timed runs below).
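
Failure 2's check is simple to picture: call gettimeofday() in a
tight loop and flag any sample that is earlier than the previous one.
A minimal sketch of that kind of monotonicity test (an illustration,
not the actual LTP gettimeofday02 source):

    #include <stdio.h>
    #include <sys/time.h>
    #include <time.h>

    int main(void)
    {
        struct timeval prev, cur;
        time_t end;

        if (gettimeofday(&prev, NULL) != 0) {
            perror("gettimeofday");
            return 1;
        }
        end = prev.tv_sec + 30;   /* the real test runs ~30s */

        do {
            if (gettimeofday(&cur, NULL) != 0) {
                perror("gettimeofday");
                return 1;
            }
            /* Flag any sample earlier than the previous one. */
            if (cur.tv_sec < prev.tv_sec ||
                (cur.tv_sec == prev.tv_sec && cur.tv_usec < prev.tv_usec)) {
                fprintf(stderr,
                        "Time is going backwards (old %ld.%06ld vs new %ld.%06ld)\n",
                        (long)prev.tv_sec, (long)prev.tv_usec,
                        (long)cur.tv_sec, (long)cur.tv_usec);
                return 1;
            }
            prev = cur;
        } while (cur.tv_sec < end);

        printf("gettimeofday is monotonous\n");
        return 0;
    }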
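
Failure 3 is the usual format-string mismatch made fatal by -Werror;
the fix is to make the specifier match the argument type or widen the
argument. A minimal illustration of the pattern, with ordinary printf
standing in for the hypervisor's printk (not the actual mm.c code):

    #include <stdio.h>

    int main(void)
    {
        unsigned int count = 163;   /* illustrative value */

        /* Broken: %ld expects long int but the argument is unsigned
         * int; with warnings treated as errors this stops the build.
         * printf("count = %ld\n", count);
         */

        /* Fixed: match the specifier to the type... */
        printf("count = %u\n", count);

        /* ...or widen the argument explicitly. */
        printf("count = %lu\n", (unsigned long)count);
        return 0;
    }
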
File: regression.1

Time            Level   Message
05:00:50        INFO    Reporting status: 'Test Running' for test: 
ltp_gettimeofday02
05:00:52        INFO    Preparing to run test 'ltp_gettimeofday02' using 
profile: /qa/conductor/profiles/ltp/syscalls/gettimeofday02.xml
05:00:52        INFO    Starting test 'ltp_gettimeofday02' using profile: 
/qa/conductor/profiles/ltp/syscalls/gettimeofday02.xml
05:00:52        INFO    Dispatching operation: RemoteShell
05:00:52        FINE    Client sequencer got message requesting the start of a 
new test: ltp_gettimeofday02
05:00:52        FINER   Client sequencer sent message of type: 4 with seq num: 
1 of size: 289 bytes
05:00:52        FINER   Client sequencer handling new operation from control 
sequencer
05:00:52        FINE    Client sequencer looking for class: 
com.katana.conductor.operations.RemoteShell
05:00:52        INFO    Operation RemoteShell running
05:00:52        FINE    Client sequencer was told that an operation is now 
running
05:00:52        INFO    RemoteShell: target node(s) = vs177
05:00:52        INFO    ssh: /usr/bin/ssh root@vs177 cd 
/qa/conductor/tests/ltp/testcases/bin; gettimeofday02
05:00:52        FINE    ssh: waiting for command to finish
05:00:53        INFO       ssh: gettimeofday02 0 INFO : checking if 
gettimeofday is monotonous, takes 30s
05:00:53        INFO       ssh: gettimeofday02 1 FAIL : Time is going backwards 
(old 1145696453.61428 vs new 1145696453.60660!
05:00:53        FINE    executeShellCmd(ssh): exit value is 1
05:00:53        SEVERE  RemoteShell: command failed with error = 1
05:00:53        SEVERE  Operation RemoteShell failed
05:00:53        SEVERE  Reporting status: 'Test Failed' for test: 
ltp_gettimeofday02
05:00:53        FINE    Client sequencer detected operation completed with 
status of: Fail
05:00:53        FINER   Client sequencer sent message of type: 5 with seq num: 
2 of size: 429 bytes
05:00:53        SEVERE  Crash Collection disabled for queue : RHEL4U2-32b-XEN
05:00:53        INFO    Cleaning up after test

Queue: RHEL3U6-32b-XEN32
User:  QA
Hypervisor Build: 20060505_000 Version: main Config: Release Path: 
/repository/trees/main/20060505_000/xen32/xen-unstable/dist Dist:Xen32

Completion status:
  test run completed

Start time: May 4, 2006 3:57:19 PM
End time:   May 5, 2006 4:11:28 AM
Elapsed time: 12 hours 14 minutes 9 seconds

Tests/Config changes in queue: 851
Tests processed:  851
Tests passed:     850
Tests failed:     1
Tests aborted:    0
Forced reboots:   0


aio-stress025 performance issue:

RHEL4U2-32b native hardware:               6 seconds
RHEL4U2-32b domU guest:         3 minutes 31 seconds
RHEL3U6-32b domU guest:       173 minutes  1 second
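
aio-stress drives the kernel AIO path (io_submit/io_getevents against
O_DIRECT files), which appears to be the slow path in the fully
virtualized 2.4 guest. A stripped-down sketch of that I/O pattern,
assuming libaio and an existing test file (an illustration, not the
aio-stress source; build with gcc -O2 sketch.c -laio):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <libaio.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #define RECSIZE (512 * 1024)   /* -r512 above: 512KB records */
    #define DEPTH   64             /* "depth 64" above */

    int main(int argc, char **argv)
    {
        io_context_t ctx;
        struct iocb cbs[DEPTH], *cbp[DEPTH];
        struct io_event events[DEPTH];
        void *buf;
        int fd, i, ret;

        if (argc != 2) {
            fprintf(stderr, "usage: %s <file>\n", argv[0]);
            return 1;
        }

        /* O_DIRECT is why the 4KB buffer alignment above matters. */
        fd = open(argv[1], O_RDONLY | O_DIRECT);
        if (fd < 0) { perror("open"); return 1; }

        memset(&ctx, 0, sizeof(ctx));
        ret = io_setup(DEPTH, &ctx);
        if (ret != 0) { fprintf(stderr, "io_setup: %d\n", ret); return 1; }

        for (i = 0; i < DEPTH; i++) {
            if (posix_memalign(&buf, 4096, RECSIZE) != 0) {
                fprintf(stderr, "posix_memalign failed\n");
                return 1;
            }
            /* One read per slot, each at a different offset. */
            io_prep_pread(&cbs[i], fd, buf, RECSIZE,
                          (long long)i * RECSIZE);
            cbp[i] = &cbs[i];
        }

        ret = io_submit(ctx, DEPTH, cbp);
        if (ret < 0) { fprintf(stderr, "io_submit: %d\n", ret); return 1; }

        /* Block until every submitted read completes. */
        ret = io_getevents(ctx, ret, DEPTH, events, NULL);
        printf("completed %d reads of %dKB\n", ret, RECSIZE / 1024);

        io_destroy(ctx);
        close(fd);
        return 0;
    }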

Here are the timed runs:

dom0 run on native hardware:
----------------------------
[root@tst122 ~]# time ./aio-stress -I500 -o3 -O -r512 -t8 /test/aiodio/junkfile 
/test/aiodio/file2 /test/aiodio/file7 /test/aiodio/file8 /test/aiodio/file3 
/test/aiodio/file4 /test/aiodio/file5 /test/aiodio/file6
adding stage random read
starting with random read
file size 1024MB, record size 512KB, depth 64, ios per iteration 8
max io_submit 64, buffer alignment set to 4KB
threads 8 files 8 contexts 1 context offset 2MB verification off
Running multi thread version num_threads:8
random read on /test/aiodio/file3 (182109.19 MB/s) 1024.00 MB in 0.01s
random read on /test/aiodio/file6 (175975.25 MB/s) 1024.00 MB in 0.01s
random read on /test/aiodio/file4 (176887.20 MB/s) 1024.00 MB in 0.01s
random read on /test/aiodio/file2 (40123.82 MB/s) 1024.00 MB in 0.03s
random read on /test/aiodio/file5 (95934.05 MB/s) 1024.00 MB in 0.01s
random read on /test/aiodio/file8 (4322.75 MB/s) 1024.00 MB in 0.24s
random read on /test/aiodio/file7 (4224.42 MB/s) 1024.00 MB in 0.24s
thread 5 random read totals (2815.35 MB/s) 1024.00 MB in 0.36s
thread 7 random read totals (2770.62 MB/s) 1024.00 MB in 0.37s
thread 4 random read totals (2728.51 MB/s) 1024.00 MB in 0.38s
thread 1 random read totals (2720.31 MB/s) 1024.00 MB in 0.38s
thread 6 random read totals (2860.49 MB/s) 1024.00 MB in 0.36s
thread 2 random read totals (1153.87 MB/s) 1024.00 MB in 0.89s
thread 3 random read totals (1153.87 MB/s) 1024.00 MB in 0.89s
random read on /test/aiodio/junkfile (1.63 MB/s) 4.00 MB in 2.46s
thread 0 random read totals (1.62 MB/s) 4.00 MB in 2.47s
random read throughput (2905.81 MB/s) 7172.00 MB in 2.47s min transfer 4.00MB

real    0m5.748s
user    0m0.050s
sys     0m0.620s
[root@tst122 ~]#

RHEL4U2-32b domU guest:
-----------------------
[root@vs162 testcases]# time ./kernel/io/ltp-aiodio/aio-stress -I500 -o3 -O 
-r512 -t8 /test/aiodio/junkfile /test/aiodio/file2 /test/aiodio/file7 
/test/aiodio/file8 /test/aiodio/file3 /test/aiodio/file4 /test/aiodio/file5 
/test/aiodio/file6
adding stage random read
starting with random read
file size 1024MB, record size 512KB, depth 64, ios per iteration 8
max io_submit 64, buffer alignment set to 4KB
threads 8 files 8 contexts 1 context offset 2MB verification off
Running multi thread version num_threads:8
random read on /test/aiodio/file2 (7.30 MB/s) 1024.00 MB in 140.35s
thread 1 random read totals (7.27 MB/s) 1024.00 MB in 140.84s
random read on /test/aiodio/file4 (0.06 MB/s) 8.00 MB in 141.47s
thread 5 random read totals (0.06 MB/s) 8.00 MB in 141.60s
random read on /test/aiodio/file5 (0.03 MB/s) 4.00 MB in 141.83s
thread 6 random read totals (0.03 MB/s) 4.00 MB in 141.94s
random read on /test/aiodio/file8 (0.03 MB/s) 4.00 MB in 142.82s
thread 3 random read totals (0.03 MB/s) 4.00 MB in 142.92s
random read on /test/aiodio/file3 (0.03 MB/s) 4.00 MB in 143.04s
thread 4 random read totals (0.03 MB/s) 4.00 MB in 143.08s
random read on /test/aiodio/file6 (0.03 MB/s) 4.00 MB in 143.45s
thread 7 random read totals (0.03 MB/s) 4.00 MB in 143.52s
random read on /test/aiodio/file7 (0.03 MB/s) 4.00 MB in 143.80s
thread 2 random read totals (0.03 MB/s) 4.00 MB in 143.91s
random read on /test/aiodio/junkfile (0.03 MB/s) 4.00 MB in 145.83s
thread 0 random read totals (0.03 MB/s) 4.00 MB in 145.92s
random read throughput (7.24 MB/s) 1056.00 MB in 145.92s min transfer 4.00MB

real    2m31.215s
user    0m0.724s
sys     0m1.916s
[root@vs162 testcases]#

RHEL3U6 domU guest:
-------------------
[root@vs174 testcases]# time ./kernel/io/ltp-aiodio/aio-stress -I500 -o3 -O 
-r512 -t8 /test/aiodio/junkfile /test/aiodio/file2 /test/aiodio/file7 
/test/aiodio/file8 /test/aiodio/file3 /test/aiodio/file4 /test/aiodio/file5 
/test/aiodio/file6
adding stage random read
starting with random read
file size 1024MB, record size 512KB, depth 64, ios per iteration 8
max io_submit 64, buffer alignment set to 4KB
threads 8 files 8 contexts 1 context offset 2MB verification off
Running multi thread version num_threads:8

random read on /test/aiodio/file7 (0.10 MB/s) 1024.00 MB in 10322.46s
thread 2 random read totals (0.10 MB/s) 1024.00 MB in 10323.40s
random read on /test/aiodio/file4 (0.10 MB/s) 1024.00 MB in 10332.31s
thread 5 random read totals (0.10 MB/s) 1024.00 MB in 10332.59s
random read on /test/aiodio/file5 (0.10 MB/s) 1024.00 MB in 10344.51s
thread 6 random read totals (0.10 MB/s) 1024.00 MB in 10344.56s
random read on /test/aiodio/file6 (0.10 MB/s) 1024.00 MB in 10346.31s
thread 7 random read totals (0.10 MB/s) 1024.00 MB in 10346.88s
random read on /test/aiodio/file3 (0.10 MB/s) 1024.00 MB in 10353.64s
thread 4 random read totals (0.10 MB/s) 1024.00 MB in 10353.68s
random read on /test/aiodio/file8 (0.09 MB/s) 972.00 MB in 10373.37s
random read on /test/aiodio/file2 (0.08 MB/s) 860.00 MB in 10373.57s
random read on /test/aiodio/junkfile (0.08 MB/s) 860.00 MB in 10373.59s
thread 1 random read totals (0.08 MB/s) 860.00 MB in 10373.77s
thread 3 random read totals (0.09 MB/s) 972.00 MB in 10373.76s
thread 0 random read totals (0.08 MB/s) 860.00 MB in 10373.77s
random read throughput (0.75 MB/s) 7812.00 MB in 10373.77s min transfer 860.00MB

real    173m1.057s
user    0m1.140s
sys     0m49.050s
[root@vs174 testcases]#


