
Re: [Xen-devel] [PATCH v4 03/10] xen/blkfront: pseudo support for multi hardware queues/rings



On 11/04/2015 03:44 AM, Konrad Rzeszutek Wilk wrote:
> On Mon, Nov 02, 2015 at 12:21:39PM +0800, Bob Liu wrote:
>> Preparatory patch for multiple hardware queues (rings). The number of
>> rings is unconditionally set to 1, larger number will be enabled in next
>> patch so as to make every single patch small and readable.
> 
> s/next patch/"xen/blkfront: negotiate number of queues/rings to be used with backend"
> 
>>
>> Signed-off-by: Bob Liu <bob.liu@xxxxxxxxxx>
>> ---
>>  drivers/block/xen-blkfront.c | 327 +++++++++++++++++++++++++------------------
>>  1 file changed, 188 insertions(+), 139 deletions(-)
>>
>> diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
>> index 2a557e4..eab78e7 100644
>> --- a/drivers/block/xen-blkfront.c
>> +++ b/drivers/block/xen-blkfront.c
>> @@ -145,6 +145,7 @@ struct blkfront_info
>>      int vdevice;
>>      blkif_vdev_t handle;
>>      enum blkif_state connected;
>> +    /* Number of pages per ring buffer */
> 
> Missing full stop, aka '.'.
> 
>>      unsigned int nr_ring_pages;
>>      struct request_queue *rq;
>>      struct list_head grants;
>> @@ -158,7 +159,8 @@ struct blkfront_info
>>      unsigned int max_indirect_segments;
>>      int is_ready;
>>      struct blk_mq_tag_set tag_set;
>> -    struct blkfront_ring_info rinfo;
>> +    struct blkfront_ring_info *rinfo;
>> +    unsigned int nr_rings;
>>  };
>>  
>>  static unsigned int nr_minors;
>> @@ -190,7 +192,7 @@ static DEFINE_SPINLOCK(minor_lock);
>>      ((_segs + SEGS_PER_INDIRECT_FRAME - 1)/SEGS_PER_INDIRECT_FRAME)
>>  
>>  static int blkfront_setup_indirect(struct blkfront_ring_info *rinfo);
>> -static int blkfront_gather_backend_features(struct blkfront_info *info);
>> +static void blkfront_gather_backend_features(struct blkfront_info *info);
>>  
>>  static int get_id_from_freelist(struct blkfront_ring_info *rinfo)
>>  {
>> @@ -443,12 +445,13 @@ static int blkif_queue_request(struct request *req, struct blkfront_ring_info *r
>>               */
>>              max_grefs += INDIRECT_GREFS(req->nr_phys_segments);
>>  
>> -    /* Check if we have enough grants to allocate a requests */
>> -    if (info->persistent_gnts_c < max_grefs) {
>> +    /* Check if we have enough grants to allocate a requests, we have to
>> +     * reserve 'max_grefs' grants because persistent grants are shared by all
>> +     * rings */
> 
> Missing full stop.
> 
>> +    if (0 < max_grefs) {
> 
> <blinks> ? 0!?
> 
> max_grefs will at least be BLKIF_MAX_SEGMENTS_PER_REQUEST
> so this will always be true.
> 

No, max_grefs = req->nr_phys_segments;

It's 0 in some cases (flush requests, perhaps), and
gnttab_alloc_grant_references() cannot handle 0 as its argument.
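
To make that concrete, here is a minimal sketch of the guard being
discussed (a simplification of the blkif_queue_request() path quoted
above, not the actual patch; error handling trimmed):

	/*
	 * Only reserve grant references when the request actually carries
	 * data segments; a request with no segments (e.g. a flush) must
	 * skip gnttab_alloc_grant_references() entirely, since it does
	 * not accept a count of 0.
	 */
	if (max_grefs > 0) {
		if (gnttab_alloc_grant_references(max_grefs, &gref_head) < 0) {
			gnttab_request_free_callback(&rinfo->callback,
						     blkif_restart_queue_callback,
						     rinfo, max_grefs);
			return 1;
		}
	}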

> In which case why not just ..
>>              new_persistent_gnts = 1;
>>              if (gnttab_alloc_grant_references(
>> -                max_grefs - info->persistent_gnts_c,
>> -                &gref_head) < 0) {
>> +                max_grefs, &gref_head) < 0) {
>>                      gnttab_request_free_callback(
>>                              &rinfo->callback,
>>                              blkif_restart_queue_callback,
> 
> .. move this whole code down? And get rid of 'new_persistent_gnts'
> since it will always be true?
> 

Unless we fix gnttab_alloc_grant_references() to accept 0.

> But more importantly, why do we not check for 'info->persistent_gnts_c' 
> anymore?
> 

info->persistent_gnts_c is per-device, not per-ring, so persistent grants
may be taken by other queues/rings after we check the value here. That
would make get_grant() fail, so we have to reserve enough grants in
advance. The newly allocated grants are freed again if there turn out to
be enough grants in the persistent list.
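
Roughly, the grant handling described above looks like this (a sketch of
the idea only, not the patch itself):

	/* Reserve the worst case up front: persistent grants are shared
	 * across all rings, so info->persistent_gnts_c can drop between
	 * a check here and the later get_grant() calls. */
	gnttab_alloc_grant_references(max_grefs, &gref_head);

	/* ... build the request, preferring grants from the shared
	 * persistent list and drawing from gref_head only on a miss ... */

	/* Hand back whatever was reserved but not consumed. */
	gnttab_free_grant_references(gref_head);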

Will address all the other comments on this patch.

Thanks,
Bob
