[Xen-devel] [RFC v1 0/5] VBD: enlarge max segment per request in blkfront
Hi, list.
The maximum number of segments per request in the VBD queue is 11, while for
native Linux and other VMMs the parameter defaults to 128. This is probably
caused by the limited size of the ring shared between frontend and backend.
So I wonder whether we can put the segment data into a separate ring and let
each request consume entries dynamically according to its needs. Here is a
prototype which has not been tested much, but it works on a 64-bit Linux
3.4.6 kernel. In a sequential test I see CPU% reduced to 1/3 of the original,
but the change brings some overhead which slightly increases the CPU
utilization of random I/O.
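
To make the idea concrete, here is a rough sketch of the request layout I
have in mind; blkif_request_v2 and seg_index are illustrative names, not
what the patches finally use. Today a request embeds its segments inline,
capping it at BLKIF_MAX_SEGMENTS_PER_REQUEST (11); with a dedicated segment
ring, the main-ring request only records where its segments start and how
many there are:

#include <stdint.h>

typedef uint32_t grant_ref_t;

/* Each segment still describes one granted 4K page, as in the
 * existing blkif interface. */
struct blkif_request_segment {
        grant_ref_t gref;        /* grant reference for the data page */
        uint8_t     first_sect;  /* first sector used within the page */
        uint8_t     last_sect;   /* last sector used within the page  */
};

/* Hypothetical main-ring request: instead of embedding up to 11
 * segments inline, it points at a run of entries in the separate
 * segment ring, so nr_segments is no longer bounded by the size of
 * the request itself. */
struct blkif_request_v2 {
        uint8_t  operation;      /* BLKIF_OP_READ / BLKIF_OP_WRITE    */
        uint16_t nr_segments;    /* may now exceed 11                 */
        uint32_t seg_index;      /* first entry in the segment ring   */
        uint64_t id;             /* echoed back in the response       */
        uint64_t sector_number;  /* start sector on the virtual disk  */
};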
Here is a short summary of the data, using only 1K random reads and 64K
sequential reads in direct mode, with a physical SSD disk exported by
blkback. CPU% was obtained from xentop ("W" = with the patches, "W/O" =
without).
Read 1K random    IOPS      Dom0 CPU%   DomU CPU%
W                 52005.9   86.6        71
W/O               52123.1   85.8        66.9

Read 64K seq      BW (MB/s) Dom0 CPU%   DomU CPU%
W                 250       27.1        10.6
W/O               250       62.6        31.1
The patches would be simple if we only had to support the new method. But we
need to consider that a user may run a new kernel in the backend while the
frontend runs an older one, and we also need to handle the live migration
case. So the change becomes large...
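
For the compatibility part I would follow the existing "feature-*" xenstore
handshake (as feature-barrier does today): the backend advertises the new
ring and an old frontend simply ignores it, while a new frontend falls back
when the key is absent. A minimal sketch, where "feature-segment-ring" is a
hypothetical key and blkfront_segring_supported() an illustrative helper:

#include <linux/types.h>
#include <xen/xenbus.h>

static bool blkfront_segring_supported(struct xenbus_device *dev)
{
        int seg_ring = 0;

        /* An old backend never publishes the key; xenbus_scanf() then
         * does not match, and we fall back to the classic layout. */
        if (xenbus_scanf(XBT_NIL, dev->otherend,
                         "feature-segment-ring", "%d", &seg_ring) != 1)
                seg_ring = 0;

        return seg_ring != 0;
}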
[RFC v1 1/5]
In order to add the new segment ring, refactor the original code and split
out some methods related to ring operations.
[RFC v1 2/5]
Add segment ring support in blkfront. Most of the code deals with
suspend/recovery.
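
For the connect path, here is a minimal sketch of how blkfront could share
the extra ring page with the backend; setup_segring() and the
"segment-ring-ref" xenstore key are hypothetical, mirroring the existing
"ring-ref":

#include <linux/gfp.h>
#include <xen/xenbus.h>
#include <xen/grant_table.h>
#include <asm/xen/page.h>

/* Error unwinding of the grant is omitted for brevity. */
static int setup_segring(struct xenbus_device *dev,
                         struct xenbus_transaction xbt,
                         grant_ref_t *gref)
{
        unsigned long page = __get_free_page(GFP_NOIO | __GFP_HIGH);
        int err;

        if (!page)
                return -ENOMEM;

        /* Grant the backend access to the segment ring page. */
        err = gnttab_grant_foreign_access(dev->otherend_id,
                                          virt_to_mfn((void *)page), 0);
        if (err < 0) {
                free_page(page);
                return err;
        }
        *gref = err;

        /* Publish the reference next to the classic "ring-ref". */
        return xenbus_printf(xbt, dev->nodename,
                             "segment-ring-ref", "%u", *gref);
}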
[RFC v1 3/5]
Likewise, refactor the original code in blkback.
[RFC v1 4/5]
In order to support different ring types in blkback, make the
pending_req list per-disk.
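
Currently the pending_req structures in blkback come from a single global
pool; making the pool per-disk lets each disk size it for its own ring type.
A rough sketch (blkback_req_pool is an illustrative name, and the real
pending_req carries grant handles and bio bookkeeping):

#include <linux/list.h>
#include <linux/spinlock.h>

struct pending_req {
        struct list_head free_list;   /* link in the per-disk pool */
        /* ... grant handles, unmap info, bio bookkeeping ... */
};

/* One pool per disk instead of one global array. */
struct blkback_req_pool {
        struct list_head free_list;
        spinlock_t       lock;
};

static struct pending_req *alloc_req(struct blkback_req_pool *pool)
{
        struct pending_req *req = NULL;
        unsigned long flags;

        spin_lock_irqsave(&pool->lock, flags);
        if (!list_empty(&pool->free_list)) {
                req = list_first_entry(&pool->free_list,
                                       struct pending_req, free_list);
                list_del(&req->free_list);
        }
        spin_unlock_irqrestore(&pool->lock, flags);
        return req;
}

static void free_req(struct blkback_req_pool *pool, struct pending_req *req)
{
        unsigned long flags;

        spin_lock_irqsave(&pool->lock, flags);
        list_add(&req->free_list, &pool->free_list);
        spin_unlock_irqrestore(&pool->lock, flags);
}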
[RFC v1 5/5]
Add segment ring support in blkback.
-ronghui
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel