
Re: [win-pv-devel] [PATCH] Xenvbd Refactoring



I’ll rebase to the tip of master and build a v2 series, moving some of the later patches with useful overrides earlier in the series.

 

Owen

 

From: Paul Durrant
Sent: 30 May 2017 17:11
To: Paul Durrant; Owen Smith; win-pv-devel@xxxxxxxxxxxxxxxxxxxx
Subject: RE: [PATCH] Xenvbd Refactoring

 

I’ve taken the first four patches of the series, since they are largely cosmetic AFAICT, and I see no adverse effects in testing.

 

  Paul

 

From: win-pv-devel [mailto:win-pv-devel-bounces@xxxxxxxxxxxxxxxxxxxx] On Behalf Of Paul Durrant
Sent: 18 May 2017 18:37
To: Owen Smith <owen.smith@xxxxxxxxxx>; win-pv-devel@xxxxxxxxxxxxxxxxxxxx
Subject: Re: [win-pv-devel] [PATCH] Xenvbd Refactoring

 

A quick’n’dirty check using a checked build on a recent 64-bit Windows 10 is giving me odd results…

 

With a queue depth of 1 I’m seeing ~6k IOPS without the patches and ~18k with, which is clearly good. But with a queue depth of 32 I see ~46k IOPS without the patches and only ~20k with. That’s a substantial fall-off, so I think more work is needed before all of these can go into master.
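For reference, a roughly equivalent diskspd invocation for the queue-depth-32 case might look like the sketch below; the thread doesn’t say which tool, target, or mix was used, so the duration, write percentage, and file name here are illustrative (-o sets the queue depth, so -o1 gives the other data point):

    diskspd -b4k -d300 -t1 -o32 -r -w80 -Sh -c1G C:\test.dat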

 

Cheers,

 

  Paul

 

 

From: win-pv-devel [mailto:win-pv-devel-bounces@xxxxxxxxxxxxxxxxxxxx] On Behalf Of Owen Smith
Sent: 15 May 2017 16:53
To: win-pv-devel@xxxxxxxxxxxxxxxxxxxx
Subject: [win-pv-devel] [PATCH] Xenvbd Refactoring

 

It seems I’ve experienced mail delivery failures trying to send a 26-patch series.
Rather than resend the series, I’ve put up a branch to examine.

 

Original [00/26] Summary:

 

All patches should be taken together.

In an attempt to improve maintainability and code readability, refactor xenvbd.

This patch series also moves the SRB queueing responsibility to storport, by preparing blkif requests during BuildIo and appending them to a pending queue for the ring during StartIo. This should reduce processing during the DPC (the DPC will only complete blkif responses and their corresponding SRBs, and submit blkif requests) and take advantage of BuildIo's concurrent calls.
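As a minimal sketch of that flow (the entry-point shapes are the standard storport miniport BuildIo/StartIo callbacks, but the helper names TargetPrepareRequests and BlockRingQueueRequests are illustrative rather than the identifiers actually used in the series):

    /* BuildIo: storport calls this concurrently and without StartIo
     * serialization, so the expensive work (SGL walk, granting,
     * building blkif requests) happens here. */
    BOOLEAN
    AdapterBuildIo(
        IN  PVOID               DevExt,
        IN  PSCSI_REQUEST_BLOCK Srb
        )
    {
        PXENVBD_SRBEXT  SrbExt = Srb->SrbExtension;

        SrbExt->OriginalReq = Srb;
        if (!TargetPrepareRequests(DevExt, SrbExt))
            return FALSE;   /* SRB completed as failed; storport skips StartIo */

        return TRUE;        /* proceed to StartIo */
    }

    /* StartIo: only appends the pre-built blkif requests to the ring's
     * pending queue; the DPC submits pending requests and completes
     * responses (and their SRBs). */
    BOOLEAN
    AdapterStartIo(
        IN  PVOID               DevExt,
        IN  PSCSI_REQUEST_BLOCK Srb
        )
    {
        PXENVBD_SRBEXT  SrbExt = Srb->SrbExtension;

        BlockRingQueueRequests(DevExt, SrbExt);
        return TRUE;
    }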

 

Tested with HLK-1703 on Windows 10 x64 (build 15063.rs2_release.170317-1834).
All tests (including non-WHQL tests) passed.

 

IoMeter results, compared to the previous xenvbd and to emulated storage, on a single-socket Xeon X3450 @ 2.6GHz (guest has 2 vCPUs) with 8GB RAM (guest has 4GB, 1GB ramfs).
Access spec: 1 worker, queue depth 32, 5 minutes, 4KB, 20% read, 80% random, 128KB-aligned.

Tested        StorageBacking     IOPS
Emulated      HDD              726.75
Emulated      RAM              695.93
XenVbd 8.2    HDD              525.52
XenVbd 8.2    RAM             4330.51
ThesePatches  HDD             2475.23
ThesePatches  RAM             3619.06

Note: there was a large amount of variance in the results across reruns, approximately 10-20% for the same setup. Tests with PV drivers backed by RAM were maxing out guest CPU utilization.

 

Owen Smith (26):
  Rename Fdo -> Adapter, Remove Adapter reference counts
  Rename Pdo -> Target
  Tidy up Driver.h/.c
  Refactor Adapter.c
  Pass PXENVBD_SRBEXT, not PSCSI_REQUEST_BLOCK
  Refactor target.c
  Move ScatterGather list iteration to adapter.c
  Rename SrbExt::Srb to OriginalReq
  Move non-queue-srb handling to BuildIo
  Add override to disable specific features
  Prepare requests in BuildIo
  Move SrbExt cleanup into inline function
  Query for Cache Interface
  Use CACHE interface instead of lookaside lists
  Fix Indirect requests
  Move BlockRingPoll inline
  Move Prepared/Submitted to BlockRing
  Refactor Inquiry 0x83 handler
  Remove TargetQueueSrb, fold into caller
  Set Queue Depth, some minor fixes
  Add overrides for MaxTransferLength and MaxPhysicalBreaks
  Rename Prepared to Queued
  Track Queued/Submitted/Completed counts
  Read overrides at start of day
  Fix discard debug statements
  Add override for ring size

 

src/xenvbd/adapter.c         | 2163 +++++++++++++++++++++++++++++
src/xenvbd/adapter.h         |  128 ++
src/xenvbd/blockring.c       | 1040 +++++++++-----
src/xenvbd/blockring.h       |   44 +-
src/xenvbd/driver.c          |  480 ++-----
src/xenvbd/driver.h          |   73 +-
src/xenvbd/fdo.c             | 2206 ------------------------------
src/xenvbd/fdo.h             |  179 ---
src/xenvbd/frontend.c        | 1771 ------------------------
src/xenvbd/frontend.h        |  195 ---
src/xenvbd/granter.c         |  198 +--
src/xenvbd/granter.h         |   48 +-
src/xenvbd/notifier.c        |  335 -----
src/xenvbd/notifier.h        |  101 --
src/xenvbd/pdo.c             | 2732 -------------------------------------
src/xenvbd/pdo.h             |  229 ----
src/xenvbd/pdoinquiry.c      |  554 --------
src/xenvbd/pdoinquiry.h      |   65 -
src/xenvbd/queue.c           |  139 --
src/xenvbd/queue.h           |   86 --
src/xenvbd/srbext.h          |   85 +-
src/xenvbd/target.c          | 3100 ++++++++++++++++++++++++++++++++++++++++++
src/xenvbd/target.h          |  152 +++
vs2012/xenvbd/xenvbd.vcxproj |    8 +-
vs2013/xenvbd/xenvbd.vcxproj |    8 +-
vs2015/xenvbd/xenvbd.vcxproj |    8 +-
26 files changed, 6606 insertions(+), 9521 deletions(-)

create mode 100644 src/xenvbd/adapter.c
create mode 100644 src/xenvbd/adapter.h
delete mode 100644 src/xenvbd/fdo.c
delete mode 100644 src/xenvbd/fdo.h
delete mode 100644 src/xenvbd/frontend.c
delete mode 100644 src/xenvbd/frontend.h
delete mode 100644 src/xenvbd/notifier.c
delete mode 100644 src/xenvbd/notifier.h
delete mode 100644 src/xenvbd/pdo.c
delete mode 100644 src/xenvbd/pdo.h
delete mode 100644 src/xenvbd/pdoinquiry.c
delete mode 100644 src/xenvbd/pdoinquiry.h
delete mode 100644 src/xenvbd/queue.c
delete mode 100644 src/xenvbd/queue.h
create mode 100644 src/xenvbd/target.c
create mode 100644 src/xenvbd/target.h

 

--
2.8.3

 

A branch showing the patches is available at:
https://github.com/OwenSmith/xenvbd/tree/experiment

_______________________________________________
win-pv-devel mailing list
win-pv-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/cgi-bin/mailman/listinfo/win-pv-devel

 

