[PATCH v4 1/3] xen-blkback: add a parameter for disabling of persistent grants
From: SeongJae Park <sjpark@xxxxxxxxx>

The persistent grants feature provides high scalability.  On some small
systems, however, it can instead incur data copy overheads[1], so users
may need to disable it.  But there is no option for that.  For this
reason, this commit adds a module parameter for disabling the feature.

[1] https://wiki.xen.org/wiki/Xen_4.3_Block_Protocol_Scalability

Signed-off-by: Anthony Liguori <aliguori@xxxxxxxxxx>
Signed-off-by: SeongJae Park <sjpark@xxxxxxxxx>
Reviewed-by: Juergen Gross <jgross@xxxxxxxx>
Acked-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
---
 .../ABI/testing/sysfs-driver-xen-blkback |  9 ++++++++
 drivers/block/xen-blkback/xenbus.c       | 22 ++++++++++++++-----
 2 files changed, 25 insertions(+), 6 deletions(-)

diff --git a/Documentation/ABI/testing/sysfs-driver-xen-blkback b/Documentation/ABI/testing/sysfs-driver-xen-blkback
index ecb7942ff146..ac2947b98950 100644
--- a/Documentation/ABI/testing/sysfs-driver-xen-blkback
+++ b/Documentation/ABI/testing/sysfs-driver-xen-blkback
@@ -35,3 +35,12 @@ Description:
                 controls the duration in milliseconds that blkback will not
                 cache any page not backed by a grant mapping.
                 The default is 10ms.
+
+What:           /sys/module/xen_blkback/parameters/feature_persistent
+Date:           September 2020
+KernelVersion:  5.10
+Contact:        SeongJae Park <sjpark@xxxxxxxxx>
+Description:
+                Whether to enable the persistent grants feature or not.  Note
+                that this option only takes effect on newly created backends.
+                The default is Y (enable).
diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
index b9aa5d1ac10b..8fc34211dc8b 100644
--- a/drivers/block/xen-blkback/xenbus.c
+++ b/drivers/block/xen-blkback/xenbus.c
@@ -474,6 +474,12 @@ static void xen_vbd_free(struct xen_vbd *vbd)
 	vbd->bdev = NULL;
 }
 
+/* Enable the persistent grants feature. */
+static bool feature_persistent = true;
+module_param(feature_persistent, bool, 0644);
+MODULE_PARM_DESC(feature_persistent,
+		"Enables the persistent grants feature");
+
 static int xen_vbd_create(struct xen_blkif *blkif, blkif_vdev_t handle,
 			  unsigned major, unsigned minor, int readonly,
 			  int cdrom)
@@ -519,6 +525,8 @@ static int xen_vbd_create(struct xen_blkif *blkif, blkif_vdev_t handle,
 	if (q && blk_queue_secure_erase(q))
 		vbd->discard_secure = true;
 
+	vbd->feature_gnt_persistent = feature_persistent;
+
 	pr_debug("Successful creation of handle=%04x (dom=%u)\n",
 		 handle, blkif->domid);
 	return 0;
@@ -906,7 +914,8 @@ static void connect(struct backend_info *be)
 
 	xen_blkbk_barrier(xbt, be, be->blkif->vbd.flush_support);
 
-	err = xenbus_printf(xbt, dev->nodename, "feature-persistent", "%u", 1);
+	err = xenbus_printf(xbt, dev->nodename, "feature-persistent", "%u",
+			be->blkif->vbd.feature_gnt_persistent);
 	if (err) {
 		xenbus_dev_fatal(dev, err, "writing %s/feature-persistent",
 				 dev->nodename);
@@ -1067,7 +1076,6 @@ static int connect_ring(struct backend_info *be)
 {
 	struct xenbus_device *dev = be->dev;
 	struct xen_blkif *blkif = be->blkif;
-	unsigned int pers_grants;
 	char protocol[64] = "";
 	int err, i;
 	char *xspath;
@@ -1093,9 +1101,11 @@ static int connect_ring(struct backend_info *be)
 		xenbus_dev_fatal(dev, err, "unknown fe protocol %s", protocol);
 		return -ENOSYS;
 	}
-	pers_grants = xenbus_read_unsigned(dev->otherend, "feature-persistent",
-					   0);
-	blkif->vbd.feature_gnt_persistent = pers_grants;
+	if (blkif->vbd.feature_gnt_persistent)
+		blkif->vbd.feature_gnt_persistent =
+			xenbus_read_unsigned(dev->otherend,
+					"feature-persistent", 0);
+
 	blkif->vbd.overflow_max_grants = 0;
 
 	/*
@@ -1118,7 +1128,7 @@ static int connect_ring(struct backend_info *be)
 
 	pr_info("%s: using %d queues, protocol %d (%s) %s\n", dev->nodename,
 		 blkif->nr_rings, blkif->blk_protocol, protocol,
-		 pers_grants ? "persistent grants" : "");
+		 blkif->vbd.feature_gnt_persistent ? "persistent grants" : "");
 
 	ring_page_order = xenbus_read_unsigned(dev->otherend,
 					       "ring-page-order", 0);
-- 
2.17.1
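
As a usage sketch (not part of the patch itself): given the sysfs path
documented in the ABI file above and the 0644 parameter permissions, an
administrator could toggle the feature roughly as follows.  The modprobe
line assumes blkback is built as a module; in both cases, per the ABI
documentation, only backends created after the change are affected:

    # Disable persistent grants for backends created from now on
    # (the parameter is mode 0644, so it is writable at runtime).
    echo N > /sys/module/xen_blkback/parameters/feature_persistent

    # Alternatively, set it at module load time.
    modprobe xen-blkback feature_persistent=0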