
Re: [PATCH 03/14] rbd: increase io_opt again



On Fri, May 31, 2024 at 9:48 AM Christoph Hellwig <hch@xxxxxx> wrote:
>
> Commit 16d80c54ad42 ("rbd: set io_min, io_opt and discard_granularity to
> alloc_size") lowered the io_opt size for rbd from objset_bytes which is
> 4MB for typical setup to alloc_size which is typically 64KB.
>
> The commit mostly talks about discard behavior and mentions io_min
> only in passing.  Reducing io_opt means reducing the readahead size,
> which seems counter-intuitive given that rbd currently abuses the user
> max_sectors setting to actually increase the I/O size.  Switch back
> to the old setting to allow larger reads (the readahead size, despite
> its name, actually limits the size of any buffered read) and to prepare
> for using io_opt in the max_sectors calculation and getting drivers out
> of the business of overriding the max_user_sectors value.
>
> Signed-off-by: Christoph Hellwig <hch@xxxxxx>
> ---
>  drivers/block/rbd.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
> index 26ff5cd2bf0abc..46dc487ccc17eb 100644
> --- a/drivers/block/rbd.c
> +++ b/drivers/block/rbd.c
> @@ -4955,8 +4955,8 @@ static int rbd_init_disk(struct rbd_device *rbd_dev)
>         struct queue_limits lim = {
>                 .max_hw_sectors         = objset_bytes >> SECTOR_SHIFT,
>                 .max_user_sectors       = objset_bytes >> SECTOR_SHIFT,
> +               .io_opt                 = objset_bytes,
>                 .io_min                 = rbd_dev->opts->alloc_size,
> -               .io_opt                 = rbd_dev->opts->alloc_size,
>                 .max_segments           = USHRT_MAX,
>                 .max_segment_size       = UINT_MAX,
>         };
> --
> 2.43.0
>
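For readers following along: the readahead window is derived from io_opt when
the queue limits are applied to the backing device (if I'm reading
blk_apply_bdi_limits() right, it computes ra_pages as
max(io_opt * 2 / PAGE_SIZE, VM_READAHEAD_PAGES)).  A standalone userspace
sketch of that arithmetic, assuming a 4 KiB page size and the 128 KiB
VM_READAHEAD_PAGES default rather than using any kernel code, shows why an
io_opt of 64 KiB leaves buffered reads at the floor while objset_bytes opens
the window back up:

/*
 * Standalone sketch (not kernel code): how io_opt feeds the readahead
 * window, assuming ra_pages = max(io_opt * 2 / PAGE_SIZE,
 * VM_READAHEAD_PAGES) as applied by blk_apply_bdi_limits().
 */
#include <stdio.h>

#define PAGE_SIZE		4096u
#define VM_READAHEAD_PAGES	(128u * 1024u / PAGE_SIZE)	/* 128 KiB default */

static unsigned int ra_pages_from_io_opt(unsigned int io_opt)
{
	unsigned int ra = io_opt * 2 / PAGE_SIZE;

	return ra > VM_READAHEAD_PAGES ? ra : VM_READAHEAD_PAGES;
}

int main(void)
{
	unsigned int alloc_size = 64u << 10;	/* typical alloc_size: 64 KiB */
	unsigned int objset_bytes = 4u << 20;	/* typical object set size: 4 MiB */

	printf("io_opt = alloc_size   -> ra_pages = %u (%u KiB)\n",
	       ra_pages_from_io_opt(alloc_size),
	       ra_pages_from_io_opt(alloc_size) * PAGE_SIZE / 1024);
	printf("io_opt = objset_bytes -> ra_pages = %u (%u KiB)\n",
	       ra_pages_from_io_opt(objset_bytes),
	       ra_pages_from_io_opt(objset_bytes) * PAGE_SIZE / 1024);
	return 0;
}

With io_opt at 64 KiB the window stays at the 128 KiB floor; with io_opt back
at objset_bytes it comes out to 8 MiB, which is the behavior this patch
restores.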

Acked-by: Ilya Dryomov <idryomov@xxxxxxxxx>

Thanks,

                Ilya



 

