
[Xen-devel] RE: [PATCH] Virtual machine queue NIC support in control panel


  • To: "Zhao, Yu" <yu.zhao@xxxxxxxxx>
  • From: "Santos, Jose Renato G" <joserenato.santos@xxxxxx>
  • Date: Mon, 4 Feb 2008 23:30:58 +0000
  • Accept-language: en-US
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Mon, 04 Feb 2008 15:32:10 -0800
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: Achj6a1mWdtC/MX0R9qBYCYBhdWPIQASrclgAH68QNAAUgkHMA==
  • Thread-topic: [PATCH] Virtual machine queue NIC support in control panel


> -----Original Message-----
> From: Zhao, Yu [mailto:yu.zhao@xxxxxxxxx]
> Sent: Saturday, February 02, 2008 11:07 PM
> To: Santos, Jose Renato G
> Cc: xen-devel@xxxxxxxxxxxxxxxxxxx; Keir.Fraser@xxxxxxxxxxxx
> Subject: RE: [PATCH] Virtual machine queue NIC support in
> control panel
>
> Renato,
>
> Thanks for your comments.
> "vmq-attach/detach" are intended to associate a device queue
> to a vif when this vif is running. These two "xm" options
> require a physical NIC name and a vif reference, so they can
> invoke low level utility to do the real work. If the physical
> NIC doesn't have any available queue, the low level utility
> is supposed to return an error, thus "xm vmq-attach" will
> report the failure.
>
> Using "accel" plug-in framework to do this is a decent
> solution. However, "accel" plug-in lacks dynamic association
> function, which means user cannot set up or change a
> accelerator for a VIF when this VIF is running (as I
> mentioned in another email to Kieran Mansley). If we can
> improve "accel" plug-in to support this and some other
> features that may be required by other acceleration
> technologies, "vmq" and other coming acceleration options can
> converge.
>
> If you have any other comments or suggestions, please feel free
> to let me know. I'm working on revising this patch to use
> "accel" and will send it out later.
>

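(For concreteness, the attach/detach flow described above would
presumably be used along these lines; the device and vif names are
placeholders, and the exact argument format is whatever the patch
defines:)

    # bind a free device queue on eth2 to domain 1's first vif
    xm vmq-attach eth2 vif1.0
    # release that queue back to the pool
    xm vmq-detach eth2 vif1.0
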
  I think we need to consider two use cases:
1) The system automatically controls the allocation of device queues:
   In this mode some policy in netback allocates vifs to device
   queues. Initially this can be a very simple policy that just
   allocates device queues on a first-come, first-served basis until
   all the queues are used; after that, new vifs are mapped to a
   default shared queue (see the sketch after this list).
   Over time we can move to a more dynamic scheme that uses traffic
   measurements to change queue assignments on the fly.

2) The user controls the allocation of device queues:
   In this mode the user specifies that a particular vif should
   use a dedicated device queue. In this case the user will
   expect that the command either allocates a queue to that vif
   or fails if it cannot do so. They will also expect that the
   system does not dynamically reassign that queue to a
   different vif. Basically, this pins a device queue to a
   particular vif and prevents the queue from being used by any
   other vif.
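
(A rough sketch of the kind of case-1 policy described above, in
Python just for illustration -- not actual netback code; the queue
count and vif naming are placeholders:)

    # Hand out dedicated queues first-come, first-served; once they
    # are exhausted, new vifs fall back to a shared default queue.
    class QueueAllocator:
        SHARED = "shared"

        def __init__(self, num_queues):
            self.free = list(range(num_queues))  # dedicated queues still available
            self.assigned = {}                   # vif -> queue id or SHARED

        def attach(self, vif):
            if self.free:
                self.assigned[vif] = self.free.pop(0)
            else:
                self.assigned[vif] = self.SHARED
            return self.assigned[vif]

        def detach(self, vif):
            queue = self.assigned.pop(vif)
            if queue != self.SHARED:
                self.free.append(queue)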

I think we need to support both cases. We should probably assume
case 1 by default and switch to case 2 only on an explicit user
config option or command. It probably makes sense to start with
case 1 first and add support for case 2 later.

For case 1, the configuration parameter or command just needs to bind
a vif to a physical device and let the system choose whether the
vif will use a dedicated queue or a shared queue. In this
case I think we can share the same parameter with the Solarflare
accelerator plugin framework, since all we are doing is binding a
vif to a physical device.
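
(As an illustration, the case-1 binding could then just reuse the
existing "accel" vif parameter in the domain config, e.g. something
like the line below; the MAC address and device name are
placeholders, and the exact value the accel keyword expects depends
on the plugin:)

    vif = [ 'mac=00:16:3e:00:00:01, bridge=xenbr0, accel=eth2' ]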

For case 2 I am not sure it is a good
idea to share the same framework with the Solarflare accelerator
plugin. Since these are two different mechanisms, it seems better
to expose them through different commands or config options.
This is really a philosophical question: should the user
be able to distinguish between pinning a vif to a device queue
vs. a device context, or should these be hidden under
a higher-level abstraction? What if the same
device can support both the multi-queue model and the direct I/O
model?
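
(Purely as a strawman, with made-up keyword names, the two
alternatives could look like this in the domain config:)

    # (a) mechanism-specific keyword, separate from 'accel' (hypothetical):
    vif = [ 'mac=00:16:3e:00:00:01, bridge=xenbr0, vmq=eth2' ]

    # (b) one higher-level keyword that hides queue vs. direct I/O
    #     (also hypothetical):
    vif = [ 'mac=00:16:3e:00:00:01, bridge=xenbr0, accel=eth2, dedicated=1' ]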

Anyway, we need to be clear about the meaning of each
command or parameter. Does it just bind a vif to a physical device
and let the system automatically choose the allocation of
dedicated device queues, or does it allow the user to directly
assign queues to vifs? Please make this clear in your
commands and parameters.

Also, we will probably need some command to list the status of a vif
(i.e. whether it is using a dedicated queue or a shared queue).
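
(Something along these lines, with a made-up command name and output
format, just to illustrate the idea:)

    $ xm vmq-list eth2
    Vif        Queue
    vif1.0     2 (dedicated)
    vif3.0     shared (default)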

Thanks

Regards

Renato

> Regards,
> Yu
>

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
