
Re: [Xen-devel] [PATCH 4/4] xen-blkfront: increase the default number of indirect segments



>>> On 21.06.13 at 12:56, Roger Pau Monne <roger.pau@xxxxxxxxxx> wrote:
> When using certain storage devices (like RAID), having a bigger number
> of segments per request provides better performance.

And is there no drawback (a higher memory footprint, if nothing else)
on "certain other storage devices"? Adjusting a default just because
it is beneficial for some devices, while it may adversely affect others,
is not really a good thing - in such a case you'd be better off
determining the default dynamically per device.
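
A minimal sketch of what a per-device choice could look like (plain C,
for illustration only; the helper choose_indirect_segments and the
parameter backend_max_segments are hypothetical, not existing driver
code): cap the user-supplied module parameter by whatever the backend
advertises for that particular disk, instead of applying one static
number to every device.

#include <stdio.h>

/* Global cap, corresponding to the xen_blkif_max_segments module parameter. */
static unsigned int xen_blkif_max_segments = 64;

/* Hypothetical helper: pick the indirect-segment count for one device by
 * capping the backend's advertised maximum with the global module parameter,
 * rather than using one static default for all disks. */
static unsigned int choose_indirect_segments(unsigned int backend_max_segments)
{
	/* Backend does not support indirect descriptors at all. */
	if (backend_max_segments == 0)
		return 0;

	return backend_max_segments < xen_blkif_max_segments ?
	       backend_max_segments : xen_blkif_max_segments;
}

int main(void)
{
	/* A RAID-backed disk advertising many segments vs. a simpler disk. */
	printf("raid-backed disk: %u segments\n", choose_indirect_segments(256));
	printf("simple disk:      %u segments\n", choose_indirect_segments(8));
	return 0;
}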

Jan

> Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> Reported-by: Steven Haigh <netwiz@xxxxxxxxx>
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
> ---
>  drivers/block/xen-blkfront.c |    4 ++--
>  1 files changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
> index 2e1ee34..4e3ab34 100644
> --- a/drivers/block/xen-blkfront.c
> +++ b/drivers/block/xen-blkfront.c
> @@ -94,9 +94,9 @@ static const struct block_device_operations xlvbd_block_fops;
>   * by the backend driver.
>   */
>  
> -static unsigned int xen_blkif_max_segments = 32;
> +static unsigned int xen_blkif_max_segments = 64;
>  module_param_named(max, xen_blkif_max_segments, int, S_IRUGO);
> -MODULE_PARM_DESC(max, "Maximum amount of segments in indirect requests (default is 32)");
> +MODULE_PARM_DESC(max, "Maximum amount of segments in indirect requests (default is 64)");
>  
>  #define BLK_RING_SIZE __CONST_RING_SIZE(blkif, PAGE_SIZE)
>  
> -- 
> 1.7.7.5 (Apple Git-26)
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

