Re: [Xen-devel] [PATCH] xen-blkback: use bigger array for batch gnt operations



On 01/08/13 14:30, David Vrabel wrote:
> On 01/08/13 13:08, Roger Pau Monne wrote:
>> Right now the maximum number of grant operations that can be batched
>> in a single request is BLKIF_MAX_SEGMENTS_PER_REQUEST (11). This was
>> OK before indirect descriptors because the maximum number of segments
>> in a request was 11, but with the introduction of indirect
>> descriptors the maximum number of segments in a request has been
>> increased past 11.
>>
>> The memory used by the structures that are passed in the hypercall was
>> allocated from the stack, but if we have to increase the size of the
>> array we can no longer use stack memory, so we have to pre-allocate
>> it.
>>
>> This patch increases the maximum size of batched grant operations and
>> replaces the use of stack memory with pre-allocated memory, which is
>> reserved when the blkback instance is initialized.
> [...]
>> --- a/drivers/block/xen-blkback/xenbus.c
>> +++ b/drivers/block/xen-blkback/xenbus.c
> [...]
>> @@ -148,6 +155,16 @@ static struct xen_blkif *xen_blkif_alloc(domid_t domid)
>>                      if (!req->indirect_pages[j])
>>                              goto fail;
>>              }
>> +            req->map = kcalloc(GNT_OPERATIONS_SIZE, sizeof(req->map[0]),
>> +                               GFP_KERNEL);
>> +            if (!req->map)
>> +                    goto fail;
>> +            req->unmap = kcalloc(GNT_OPERATIONS_SIZE, sizeof(req->unmap[0]),
>> +                                 GFP_KERNEL);
>> +            if (!req->unmap)
>> +                    goto fail;
>> +            req->pages_to_gnt = kcalloc(GNT_OPERATIONS_SIZE,
>> +                                        sizeof(req->pages_to_gnt[0]),
>> +                                        GFP_KERNEL);
>> +            if (!req->pages_to_gnt)
>> +                    goto fail;
> 
> Do these need to be per-request? Or can they all share a common set of
> arrays?

No, we cannot share them unless we serialize the unmap of grants using a
spinlock (like we do when writing the response on the ring).
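
A rough sketch of what sharing would look like (a hypothetical
illustration, not the actual code: shared_gnt_batch and
xen_blkbk_unmap_shared are made-up names, and the vaddr() /
grant_handles accessors stand in for the real blkback helpers):

#include <linux/spinlock.h>
#include <xen/grant_table.h>

struct shared_gnt_batch {
	spinlock_t lock;
	struct gnttab_unmap_grant_ref unmap[GNT_OPERATIONS_SIZE];
	struct page *pages[GNT_OPERATIONS_SIZE];
};

static struct shared_gnt_batch batch;

static void xen_blkbk_unmap_shared(struct pending_req *req, int nr)
{
	int i;

	/*
	 * A single lock held across the whole batched hypercall:
	 * every completing request serializes here, not just for
	 * the short time it takes to write the response to the ring.
	 */
	spin_lock(&batch.lock);
	for (i = 0; i < nr; i++) {
		gnttab_set_unmap_op(&batch.unmap[i], vaddr(req->pages[i]),
				    GNTMAP_host_map, req->grant_handles[i]);
		batch.pages[i] = req->pages[i];
	}
	gnttab_unmap_refs(batch.unmap, NULL, batch.pages, nr);
	spin_unlock(&batch.lock);
}

Keeping the arrays per-request trades that serialization for the extra
pre-allocated memory, which is why the patch allocates them in
xen_blkif_alloc.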

