
Re: [Xen-devel] [PATCH v5 05/10] xen/blkfront: negotiate number of queues/rings to be used with backend



>  /* Common code used when first setting up, and when resuming. */
>  static int talk_to_blkback(struct xenbus_device *dev,
> @@ -1527,10 +1582,9 @@ static int talk_to_blkback(struct xenbus_device *dev,
>  {
>       const char *message = NULL;
>       struct xenbus_transaction xbt;
> -     int err, i;
> -     unsigned int max_page_order = 0;
> +     int err;
> +     unsigned int i, max_page_order = 0;
>       unsigned int ring_page_order = 0;
> -     struct blkfront_ring_info *rinfo;

Why? You end up doing the 'struct blkfront_ring_info' declaration
in two of the loops below?
>  
>       err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
>                          "max-ring-page-order", "%u", &max_page_order);
> @@ -1542,7 +1596,8 @@ static int talk_to_blkback(struct xenbus_device *dev,
>       }
>  
>       for (i = 0; i < info->nr_rings; i++) {
> -             rinfo = &info->rinfo[i];
> +             struct blkfront_ring_info *rinfo = &info->rinfo[i];
> +

Here..

> @@ -1617,7 +1677,7 @@ again:
>  
>       for (i = 0; i < info->nr_rings; i++) {
>               int j;
> -             rinfo = &info->rinfo[i];
> +             struct blkfront_ring_info *rinfo = &info->rinfo[i];

And here?

It is not a big deal, but I am curious why you added this change?
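For what it is worth, here is a minimal standalone sketch of the two styles
in question (the struct layout and the loop body are simplified stand-ins,
not the real blkfront definitions):

/*
 * Simplified stand-ins for the real blkfront structures; only the
 * shape of the loops matters here.
 */
struct blkfront_ring_info {
        unsigned int ring_ref;
};

struct blkfront_info {
        unsigned int nr_rings;
        struct blkfront_ring_info rinfo[4];
};

/* Old style: one function-scope pointer shared by every loop. */
static void setup_rings_old(struct blkfront_info *info)
{
        unsigned int i;
        struct blkfront_ring_info *rinfo;

        for (i = 0; i < info->nr_rings; i++) {
                rinfo = &info->rinfo[i];
                rinfo->ring_ref = i;
        }
}

/* New style (as in the hunks above): the pointer lives only inside the
 * loop that uses it, so it cannot carry a stale value between loops. */
static void setup_rings_new(struct blkfront_info *info)
{
        unsigned int i;

        for (i = 0; i < info->nr_rings; i++) {
                struct blkfront_ring_info *rinfo = &info->rinfo[i];

                rinfo->ring_ref = i;
        }
}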

> @@ -1717,7 +1789,6 @@ static int blkfront_probe(struct xenbus_device *dev,
>  
>       mutex_init(&info->mutex);
>       spin_lock_init(&info->dev_lock);
> -     info->xbdev = dev;

That looks like a spurious change? Ah, I see we already do the exact
same assignment earlier in blkfront_probe.

Let me take this out of this patch and spin it as a separate patch.
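For clarity, a hypothetical, stripped-down sketch of what that separate
patch amounts to (field names borrowed from the quoted diff, everything
else simplified):

struct xenbus_device;

struct blkfront_info {
        struct xenbus_device *xbdev;
};

static int blkfront_probe_sketch(struct xenbus_device *dev,
                                 struct blkfront_info *info)
{
        info->xbdev = dev;      /* assigned once, early in probe */

        /* ... mutex_init(), spin_lock_init(), ring allocation ... */

        /*
         * The later "info->xbdev = dev;" that the hunk above removes
         * would sit here; it only repeats the assignment already made.
         */
        return 0;
}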


 

