
Re: [Xen-devel] [PATCH RFC 4/4] xen, blkback: add support for multiple block rings

On Fri, Aug 22, 2014 at 02:15:58PM +0100, David Vrabel wrote:
> On 22/08/14 12:20, Arianna Avanzini wrote:
> > This commit adds to xen-blkback the support to retrieve the block
> > layer API being used and the number of available hardware queues,
> > in case the block layer is using the multi-queue API. This commit
> > also lets the driver advertise the number of available hardware
> > queues to the frontend via XenStore, therefore allowing for actual
> > multiple I/O rings to be used.
> Does it make sense for the number of queues to be dependent on the
> number of queues available in the underlying block device?

Thank you for raising that point. The current approach is probably not the best solution.

Bob Liu suggested having the number of I/O rings depend on the number
of vCPUs in the driver domain. Konrad Wilk suggested computing the
number of I/O rings with the following formula, which preserves the
possibility of explicitly defining the number of hardware queues to be
exposed to the frontend:

what_backend_exposes = some_module_parameter ?:
                       min(nr_online_cpus(), nr_hardware_queues());
io_rings = min(nr_online_cpus(), what_backend_exposes);

(Please do correct me if I misunderstood your point)

> What
> behaviour do we want when a domain is migrated to a host with different
> storage?

This first patchset does not include support for migrating a multi-queue-capable
domU to a host with different storage. The second version, which I am posting
now, does include it. With the behavior implemented as of now, the frontend
keeps using the same number of rings if the backend is still multi-queue-capable
after the migration; otherwise, it falls back to a single ring.

> Can you split this patch up as well?

Sure, thank you for the comments.

> David
