
Re: [Xen-devel] [PATCH RFC 7/8] xen-blkback: frontend feature control



On Thu, Nov 02, 2017 at 06:06:15PM +0000, Joao Martins wrote:
> The toolstack may write values to the "require" subdirectory in the
> backend main directory (e.g. backend/vbd/X/Y/). Read these values
> and use them when announcing the features to the frontend. When the
> backend scans frontend features, the values set in the require
> directory take precedence, so feature parsing is otherwise
> unchanged.
> 
> xenbus_read_feature() reads a value from the require subdirectory
> and returns it, falling back to default_val otherwise. We then
> replace all instances of xenbus_printf to use these previously
> seeded features.
> A backend_features struct is introduced and all values set there are
> used in place of the module parameters being used.
> 
> Note, however, that feature-barrier, feature-flush-support and
> feature-discard aren't probed, because the first two depend on the
> physical device and feature-discard already has tunables to adjust
> it.
> 
> Signed-off-by: Joao Martins <joao.m.martins@xxxxxxxxxx>
> ---
>  drivers/block/xen-blkback/blkback.c |  2 +-
>  drivers/block/xen-blkback/common.h  |  1 +
>  drivers/block/xen-blkback/xenbus.c  | 66 ++++++++++++++++++++++++++++++++-----
>  3 files changed, 60 insertions(+), 9 deletions(-)
> 
> diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
> index c90e933330b6..05b3f124c871 100644
> --- a/drivers/block/xen-blkback/blkback.c
> +++ b/drivers/block/xen-blkback/blkback.c
> @@ -1271,7 +1271,7 @@ static int dispatch_rw_block_io(struct xen_blkif_ring *ring,
>           unlikely((req->operation != BLKIF_OP_INDIRECT) &&
>                    (nseg > BLKIF_MAX_SEGMENTS_PER_REQUEST)) ||
>           unlikely((req->operation == BLKIF_OP_INDIRECT) &&
> -                  (nseg > MAX_INDIRECT_SEGMENTS))) {
> +                  (nseg > ring->blkif->vbd.max_indirect_segs))) {
>               pr_debug("Bad number of segments in request (%d)\n", nseg);
>               /* Haven't submitted any bio's yet. */
>               goto fail_response;
> diff --git a/drivers/block/xen-blkback/common.h b/drivers/block/xen-blkback/common.h
> index a7832428e0da..ff12f2d883b9 100644
> --- a/drivers/block/xen-blkback/common.h
> +++ b/drivers/block/xen-blkback/common.h
> @@ -229,6 +229,7 @@ struct xen_vbd {
>       unsigned int            discard_secure:1;
>       unsigned int            feature_gnt_persistent:1;
>       unsigned int            overflow_max_grants:1;
> +     unsigned int            max_indirect_segs;

Please place this above, in order to get less padding between fields.
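
Something like the following sketch (other fields elided), i.e. the
full-width field above the bitfields so they stay grouped together:

	struct xen_vbd {
		/* ... existing fields unchanged ... */
		unsigned int		max_indirect_segs;
		unsigned int		discard_secure:1;
		unsigned int		feature_gnt_persistent:1;
		unsigned int		overflow_max_grants:1;
	};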

>  };
>  
>  struct backend_info;
> diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
> index 48d796ea3626..31683f29d5fb 100644
> --- a/drivers/block/xen-blkback/xenbus.c
> +++ b/drivers/block/xen-blkback/xenbus.c
> @@ -25,11 +25,19 @@
>  
>  /* On the XenBus the max length of 'ring-ref%u'. */
>  #define RINGREF_NAME_LEN (20)
> +#define REQUIRE_PATH_LEN (256)
> +
> +struct backend_features {
> +     unsigned max_queues;
> +     unsigned max_ring_order;

unsigned int.

> +     unsigned pers_grants;

bool?

Since you are already doing this, is it worth adding support to
disable other features like flush or discard?

> +};
>  
>  struct backend_info {
>       struct xenbus_device    *dev;
>       struct xen_blkif        *blkif;
>       struct xenbus_watch     backend_watch;
> +     struct backend_features features;
>       unsigned                major;
>       unsigned                minor;
>       char                    *mode;
> @@ -602,6 +610,40 @@ int xen_blkbk_barrier(struct xenbus_transaction xbt,
>       return err;
>  }
>  
> +static int xenbus_read_feature(const char *dir, const char *node,
> +                            unsigned int default_val)
> +{
> +     char reqnode[REQUIRE_PATH_LEN];
> +     unsigned int val;
> +
> +     snprintf(reqnode, REQUIRE_PATH_LEN, "%s/require", dir);
> +     val = xenbus_read_unsigned(reqnode, node, default_val);
> +     return val;
> +}

You could avoid the temporary buffer by doing something like:

> +
> +static void xen_blkbk_probe_features(struct xenbus_device *dev,
> +                                  struct backend_info *be)
> +{
> +     struct backend_features *ft = &be->features;
> +     struct xen_vbd *vbd = &be->blkif->vbd;
> +

#define READ_FEAT(path, feat, default) \
        xenbus_read_unsigned(path, "require/" feat, default)

> +     vbd->max_indirect_segs = xenbus_read_feature(dev->nodename,
> +                                             "feature-max-indirect-segments",
> +                                             MAX_INDIRECT_SEGMENTS);
> +
> +     ft->max_queues = xenbus_read_feature(dev->nodename,
> +                                          "multi-queue-max-queues",
> +                                          xenblk_max_queues);
> +
> +     ft->max_ring_order = xenbus_read_feature(dev->nodename,
> +                                              "max-ring-page-order",
> +                                              xen_blkif_max_ring_order);
> +
> +     ft->pers_grants = xenbus_read_feature(dev->nodename,
> +                                           "feature-persistent",
> +                                           1);
#undef READ_FEAT
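
Each call site would then become, e.g.:

	vbd->max_indirect_segs = READ_FEAT(dev->nodename,
					   "feature-max-indirect-segments",
					   MAX_INDIRECT_SEGMENTS);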

Aren't you missing some check here or elsewhere to make sure the
proposed values don't exceed the maximums supported by blkback?

I would expect something like:

vbd->max_indirect_segs = min(vbd->max_indirect_segs, MAX_INDIRECT_SEGMENTS);

And the same with max_queues and max_ring_order.

It might also be a good idea to print some message when the proposed
value by the toolstack is not supported by the backend.
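
Untested sketch of what I mean, taking the existing module parameters
and MAX_INDIRECT_SEGMENTS as the effective maxima:

	if (vbd->max_indirect_segs > MAX_INDIRECT_SEGMENTS) {
		pr_warn("%s: feature-max-indirect-segments %u capped to %u\n",
			dev->nodename, vbd->max_indirect_segs,
			MAX_INDIRECT_SEGMENTS);
		vbd->max_indirect_segs = MAX_INDIRECT_SEGMENTS;
	}
	ft->max_queues = min(ft->max_queues, xenblk_max_queues);
	ft->max_ring_order = min(ft->max_ring_order, xen_blkif_max_ring_order);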

Thanks, Roger.
