
Re: [Xen-devel] [PATCH v3 3/9] xen/blkfront: separate per ring information out of device info



El 10/10/15 a les 10.30, Bob Liu ha escrit:
> 
> On 10/03/2015 01:02 AM, Roger Pau Monné wrote:
>> El 05/09/15 a les 14.39, Bob Liu ha escrit:
>>> Split per ring information into a new structure: blkfront_ring_info, also
>>> rename per blkfront_info to blkfront_dev_info.
>>   ^ removed.
>>>
>>> A ring is the representation of a hardware queue; every vbd device can
>>> associate with one or more blkfront_ring_info, depending on how many
>>> hardware queues/rings are to be used.
>>>
>>> This patch is a preparation for supporting real multi hardware queues/rings.
>>>
>>> Signed-off-by: Arianna Avanzini <avanzini.arianna@xxxxxxxxx>
>>> Signed-off-by: Bob Liu <bob.liu@xxxxxxxxxx>
>>> ---
>>>  drivers/block/xen-blkfront.c |  854 ++++++++++++++++++++++--------------------
>>>  1 file changed, 445 insertions(+), 409 deletions(-)
>>>
>>> diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
>>> index 5dd591d..bf416d5 100644
>>> --- a/drivers/block/xen-blkfront.c
>>> +++ b/drivers/block/xen-blkfront.c
>>> @@ -107,7 +107,7 @@ static unsigned int xen_blkif_max_ring_order;
>>>  module_param_named(max_ring_page_order, xen_blkif_max_ring_order, int, S_IRUGO);
>>>  MODULE_PARM_DESC(max_ring_page_order, "Maximum order of pages to be used for the shared ring");
>>>  
>>> -#define BLK_RING_SIZE(info) __CONST_RING_SIZE(blkif, PAGE_SIZE * (info)->nr_ring_pages)
>>> +#define BLK_RING_SIZE(dinfo) __CONST_RING_SIZE(blkif, PAGE_SIZE * (dinfo)->nr_ring_pages)
>>
>> This change looks pointless, any reason to use dinfo instead of info?
>>
>>>  #define BLK_MAX_RING_SIZE __CONST_RING_SIZE(blkif, PAGE_SIZE * XENBUS_MAX_RING_PAGES)
>>>  /*
>>>   * ring-ref%i i=(-1UL) would take 11 characters + 'ring-ref' is 8, so 19
>>> @@ -116,12 +116,31 @@ MODULE_PARM_DESC(max_ring_page_order, "Maximum order of pages to be used for the
>>>  #define RINGREF_NAME_LEN (20)
>>>  
>>>  /*
>>> + *  Per-ring info.
>>> + *  Every blkfront device can associate with one or more blkfront_ring_info,
>>> + *  depending on how many hardware queues to be used.
>>> + */
>>> +struct blkfront_ring_info
>>> +{
>>> +   struct blkif_front_ring ring;
>>> +   unsigned int ring_ref[XENBUS_MAX_RING_PAGES];
>>> +   unsigned int evtchn, irq;
>>> +   struct work_struct work;
>>> +   struct gnttab_free_callback callback;
>>> +   struct blk_shadow shadow[BLK_MAX_RING_SIZE];
>>> +   struct list_head grants;
>>> +   struct list_head indirect_pages;
>>> +   unsigned int persistent_gnts_c;
>>
>> persistent grants should be per-device, not per-queue IMHO. Is it really
>> hard to make this global instead of per-queue?
>>
> 
> I didn't see the benefit of making it per-device, but disadvantages instead:
> if persistent grants are per-device, then we have to introduce an extra lock
> to protect this list, which will complicate the code and may slow down
> performance when the queue number is large, e.g. 16 queues.

IMHO, and as I said in the reply to patch 7, there's no way to know that
unless you actually implement it, and I think it was easier to just add
locks around existing functions without moving the data structures
(leaving them per-device).

Also, you didn't want to enable multiple queues by default because of
the RAM usage; if we make all this per-device, RAM usage is not going to
increase much, which means we could enable multiple queues by default
with a sensible value (4 maybe?). TBH, I don't think we are going to see
contention with 4 queues per device.

Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel