
Re: [Xen-devel] [PATCH] Segments can span multiple clusters with tap:qcow


  • To: Mark McLoughlin <markmc@xxxxxxxxxx>, xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: Keir Fraser <keir@xxxxxxxxxxxxx>
  • Date: Thu, 26 Apr 2007 10:09:10 +0100
  • Delivery-date: Thu, 26 Apr 2007 02:08:00 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AceH4ozsy6dcevPVEduFMQAX8io7RQ==
  • Thread-topic: [Xen-devel] [PATCH] Segments can span multiple clusters with tap:qcow

On 25/4/07 21:41, "Mark McLoughlin" <markmc@xxxxxxxxxx> wrote:

> In blktap's qcow we need to split up read/write requests if the requests
> span multiple clusters. However, with our MAX_AIO_REQUESTS define we
> assume that there is only ever a single aio request per tapdisk request,
> and under heavy I/O we can run out of room, causing us to cancel
> requests.
> 
> The attached patch dynamically allocates (based on cluster_bits) the
> various io request queues the driver maintains.

The current code allocates aio-request info for every segment in a request
ring (MAX_AIO_REQUESTS == BLK_RING_SIZE * MAX_SEGMENTS_PER_REQUEST). This
patch seems to take into account that each segment (part-of-page) can itself
be split into clusters, hence the page_size/cluster_size calculation, but
shouldn't this be multiplied by the existing MAX_AIO_REQUESTS? Otherwise you
provide only enough aio requests for one segment at a time, rather than a
request ring's worth of segments?

 -- Keir


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

