
[Xen-devel] [PATCH 0/6] block: enable multi-page bvec for passthrough IO



Hi,

Now the whole IO stack is capable of handling multi-page bvec, and it has
been enabled in the normal FS IO path. However, it hasn't been done for
passthrough IO yet.

Without enabling multi-page bvec for passthrough IO, we can't go ahead
with optimizing the related IO paths, such as bvec merging and
bio_add_pc_page() simplification.

This patchset enables multi-page bvec for passthrough IO. It turns out
that bio_add_pc_page() is simplified a lot; in particular, the physical
segment number of a passthrough bio is now always the same as
bio.bi_vcnt. Also, the bvec merging inside a bio is killed.

blktests (block/029) is added for covering the passthrough IO path, and
this patchset does pass the new block/029 test:

        https://marc.info/?l=linux-block&m=155175063417139&w=2


Ming Lei (6):
  block: pass page to xen_biovec_phys_mergeable
  block: don't merge adjacent bvecs to one segment in bio
    blk_queue_split
  block: check if page is mergeable in one helper
  block: put the same page when adding it to bio
  block: enable multi-page bvec for passthrough IO
  block: don't check if adjacent bvecs in one bio can be mergeable

 block/bio.c            | 121 +++++++++++++++++++++++++++----------------------
 block/blk-merge.c      |  98 ++++++++++++++++++++-------------------
 block/blk.h            |   2 +-
 drivers/xen/biomerge.c |   5 +-
 include/linux/bio.h    |  12 ++++-
 include/xen/xen.h      |   2 +-
 6 files changed, 134 insertions(+), 106 deletions(-)

Cc: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
Cc: Juergen Gross <jgross@xxxxxxxx>
Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx
Cc: Omar Sandoval <osandov@xxxxxx>
Cc: Christoph Hellwig <hch@xxxxxx>


-- 
2.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
