Re: [Xen-devel] [PATCH RFC 4/4] xen-netfront: Add support for multiple queues
On Wed, Jan 15, 2014 at 04:23:24PM +0000, Andrew J. Bennieston wrote:
[...]
> +/* Module parameters */
> +unsigned int xennet_max_queues = 16;
> +module_param(xennet_max_queues, uint, 0644);
> +
This default of 16 looks quite arbitrary as well.
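If the 16 stays, it would help to at least document the parameter, e.g. (sketch only, description text is just a suggestion):

	MODULE_PARM_DESC(xennet_max_queues,
			 "Maximum number of queues per virtual interface");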
> static const struct ethtool_ops xennet_ethtool_ops;
>
[...]
> +static int write_queue_xenstore_keys(struct netfront_queue *queue,
> +				     struct xenbus_transaction *xbt,
> +				     int write_hierarchical)
> +{
> +	/* Write the queue-specific keys into XenStore in the traditional
> +	 * way for a single queue, or in per-queue subkeys for multiple
> +	 * queues.
> +	 */
> +	struct xenbus_device *dev = queue->info->xbdev;
> +	int err;
> +	const char *message;
> +	char *path;
> +	size_t pathsize;
> +
> +	/* Choose the correct place to write the keys */
> +	if (write_hierarchical) {
> +		pathsize = strlen(dev->nodename) + 10;
> +		path = kzalloc(pathsize, GFP_KERNEL);
> +		if (!path) {
> +			err = -ENOMEM;
> +			message = "writing ring references";
This error message doesn't sound right; the failure here is a memory allocation, not writing ring references.
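Something along these lines would be more accurate (the message text is only a suggestion):

	if (!path) {
		err = -ENOMEM;
		message = "allocating queue path";
		goto error;
	}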
> +			goto error;
> +		}
> +		snprintf(path, pathsize, "%s/queue-%u",
> +			 dev->nodename, queue->number);
> +	}
> +	else
> +		path = (char *)dev->nodename;
Coding style: since the if branch has braces, the else branch should be surrounded by {} as well.
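I.e., per CodingStyle:

	} else {
		path = (char *)dev->nodename;
	}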
> +
[...]
> @@ -1740,10 +1838,17 @@ static int talk_to_netback(struct xenbus_device *dev,
> 	int err;
> 	unsigned int feature_split_evtchn;
> 	unsigned int i = 0;
> +	unsigned int max_queues = 0;
> 	struct netfront_queue *queue = NULL;
>
> 	info->netdev->irq = 0;
>
> +	/* Check if backend supports multiple queues */
> +	err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
> +			   "multi-queue-max-queues", "%u", &max_queues);
> +	if (err < 0)
> +		max_queues = 1;
> +
We need to check whether the backend provides too big a number for the frontend to handle.
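Something like this untested sketch, capping the advertised value at the xennet_max_queues module parameter introduced earlier in the patch:

	err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
			   "multi-queue-max-queues", "%u", &max_queues);
	if (err < 0)
		max_queues = 1;
	/* Never accept more queues than the frontend is prepared to handle */
	if (max_queues > xennet_max_queues)
		max_queues = xennet_max_queues;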
Wei.