
Re: [Xen-devel] blkfront resource use



>>> On 07.02.17 at 14:59, <roger.pau@xxxxxxxxxx> wrote:
> On Mon, Feb 06, 2017 at 03:53:58AM -0700, Jan Beulich wrote:
>> Interestingly, I've found
>> https://groups.google.com/forum/#!topic/linux.kernel/N6Q171xkIkM 
>> while looking around - is there a reason this or something similar
>> never made it into the driver? Without such an adjustment, a single
>> spike in I/O can lead to a significant number of grants being "lost"
>> in the queue of a single frontend instance.
> 
> IIRC we didn't go for that solution and instead implemented a limit in blkback
> that can be set by the system administrator. But yes, it is still possible for
> a single blkfront instance to use a huge number of grants, although only
> temporarily. When the I/O spike is done (i.e. the bio is done) blkfront should
> release the grants. If this is a system doing a huge amount of I/O, the default
> number of grant table pages should probably be increased.
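
Purely for illustration, not the actual xen-blkback code: a minimal sketch of
the kind of backend-side cap described above, i.e. persistently mapped grants
kept on an LRU list and unmapped again once their count exceeds an
administrator-set limit. In real xen-blkback that limit is, iirc, the
max_persistent_grants module parameter, and the hypervisor-side total can be
raised via the gnttab_max_frames Xen command-line option. All identifiers
below are made up for the sketch.

    /*
     * Illustrative sketch only -- not the real xen-blkback implementation.
     * The backend keeps persistently mapped grants on an LRU list; when
     * their number exceeds an administrator-set limit, the least recently
     * used ones are unmapped so the frontend can reclaim the grant refs.
     */
    #include <linux/list.h>
    #include <linux/slab.h>
    #include <xen/grant_table.h>

    struct pgrant {
            struct list_head lru;     /* LRU position, most recent at tail */
            grant_handle_t handle;    /* handle from GNTTABOP_map_grant_ref */
    };

    struct backend_state {            /* hypothetical per-device state */
            struct list_head pgrant_lru;
            unsigned int nr_pgrants;
    };

    static unsigned int max_pgrants = 352;  /* would be a module parameter */

    /* Hypothetical helper; real code would issue GNTTABOP_unmap_grant_ref. */
    static void pgrant_unmap(grant_handle_t handle)
    {
    }

    static void purge_persistent_grants(struct backend_state *be)
    {
            struct pgrant *pg, *tmp;

            list_for_each_entry_safe(pg, tmp, &be->pgrant_lru, lru) {
                    if (be->nr_pgrants <= max_pgrants)
                            break;
                    pgrant_unmap(pg->handle);  /* give the grant back */
                    list_del(&pg->lru);
                    be->nr_pgrants--;
                    kfree(pg);
            }
    }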

What would enforce that release? The driver itself doesn't appear
to actively do anything here. Is that because the backend would
limit the number of grants it keeps mapped (which the frontend then
notices, releasing the grants)?

In the end, any grants left unused over an extended period of time
are a waste of resources. Once again, the situation in which all of
this came up was a guest with over a hundred block devices. If for
every one of them the driver keeps a meaningful number of grants in
its internal queues, there could (in the default config) be over 100k
grants that no one else can make use of. In the worst case even
splitting requests may then not help, when not even enough grants are
available for the I/O of a single sector.
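
Purely as a sketch of the direction argued for here (and roughly what the
patch in the linked thread aimed at, as far as I understand it): a delayed
work item in the frontend that revokes cached grants which have been idle
for a while, so a transient I/O spike only pins the extra grants
temporarily. All names below are hypothetical and this is not the actual
xen-blkfront code; gnttab_end_foreign_access() is used in its three-argument
4.x-era form.

    /*
     * Illustrative sketch only -- not the real xen-blkfront code and not
     * the patch from the linked thread.  A delayed work item walks the
     * per-device list of cached grants and revokes those that have been
     * idle for a while, so they return to the pool instead of staying
     * "lost" in one frontend's queue.
     */
    #include <linux/jiffies.h>
    #include <linux/list.h>
    #include <linux/slab.h>
    #include <linux/spinlock.h>
    #include <linux/workqueue.h>
    #include <xen/grant_table.h>

    #define GRANT_IDLE_TIMEOUT      (30 * HZ)  /* arbitrary for the sketch */

    struct cached_grant {
            struct list_head node;
            grant_ref_t gref;
            unsigned long last_used;  /* jiffies of last use in an I/O */
    };

    struct frontend_state {           /* hypothetical per-device state */
            spinlock_t lock;
            struct list_head free_grants;
            unsigned int nr_free_grants;
            struct delayed_work shrink_work;
    };

    static void blkfront_shrink_grants(struct work_struct *work)
    {
            struct frontend_state *info = container_of(to_delayed_work(work),
                                                       struct frontend_state,
                                                       shrink_work);
            unsigned long cutoff = jiffies - GRANT_IDLE_TIMEOUT;
            struct cached_grant *gnt, *tmp;

            spin_lock_irq(&info->lock);
            list_for_each_entry_safe(gnt, tmp, &info->free_grants, node) {
                    if (time_after(gnt->last_used, cutoff))
                            continue;  /* used recently, keep it */
                    /* Revoke the grant; the page is handed back elsewhere. */
                    gnttab_end_foreign_access(gnt->gref, 0, 0UL);
                    list_del(&gnt->node);
                    info->nr_free_grants--;
                    kfree(gnt);
            }
            spin_unlock_irq(&info->lock);

            schedule_delayed_work(&info->shrink_work, GRANT_IDLE_TIMEOUT);
    }

With something along these lines, the grants taken by a spike would be given
back after GRANT_IDLE_TIMEOUT rather than being held for the lifetime of the
device.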

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

