               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
vmdisk       188G  1.18T    287  2.57K  2.24M  19.7M
  mirror    62.7G   401G    142    890  1.12M  6.65M
    c8t2d0      -      -     66    352   530K  6.49M
    c9t0d0      -      -     76    302   612K  6.65M
  mirror    62.7G   401G     83    856   670K  6.39M
    c8t3d0      -      -     43    307   345K  6.40M
    c9t1d0      -      -     40    293   325K  6.40M
  mirror    62.7G   401G     60    886   485K  6.68M
    c8t4d0      -      -     50    373   402K  6.68M
    c9t4d0      -      -     10    307  82.9K  6.68M
  c9t5d0    77.1G   389G    472     38  3.86M  3.50M
----------  -----  -----  -----  -----  -----  -----
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
vmdisk       188G  1.18T     75  3.52K   594K  27.1M
  mirror    62.7G   401G     30  1.16K   239K  8.89M
    c8t2d0      -      -     10    464  86.6K  8.89M
    c9t0d0      -      -     19    350   209K  8.89M
  mirror    62.7G   401G      0  1.18K      0  9.10M
  mirror    62.7G   401G     45  1.18K   355K  9.11M
    c8t4d0      -      -     37    469   354K  9.11M
    c9t4d0      -      -      7    391  57.7K  9.11M
  c9t5d0    77.1G   389G    514    157  4.14M  17.4M
----------  -----  -----  -----  -----  -----  -----
Can you tell me why this happens? Is this behavior coming from Linux or Xen or ZFS? I do notice that iostat reports an iowait of about 25%, but I don't know which of them is causing the bottleneck.
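
For reference, one way to watch the host and the pool side by side while the test runs is something like the following; the 5-second interval and running the two in separate terminals are just illustrative, not the exact commands I used:

    # per-device utilization and iowait on the host, sampled every 5 seconds
    iostat -x 5

    # the pool and its vdevs over the same interval, in a second terminal
    zpool iostat -v vmdisk 5

That at least shows whether the load is concentrated on one disk or spread evenly across the mirrors.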
Instead of writing an 800 MB file, if I write an 8 GB file the performance is very poor (40 MB/s or so), and again there is a long iowait after the dd command returns.
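
For concreteness, the kind of dd write test I mean looks roughly like this; the block size, count, and output path are hypothetical, not the exact command I ran:

    # hypothetical 8 GB sequential write test
    dd if=/dev/zero of=/mnt/testfile bs=1M count=8192 conv=fdatasync

With GNU dd, conv=fdatasync makes dd wait for the data to actually reach disk before reporting a speed; without it, if dd returns quickly but iowait continues afterwards, part of the write was likely still sitting in the page cache when dd finished.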
Any help would be really appreciated.