Re: [Xen-devel] Request for help, implementing a new network scheduler
Hi,

On 03/02/15 19:55, ronald pina wrote:
> Thanks Zoli for your clear and very helpful answer. My idea is to
> prioritize traffic that comes from different guests: for example, if
> guest_1 is used for a VoIP server, then it would be reasonable to
> schedule the vif of guest_1 before those of the other guests. The
> diagram attached below explains my idea better. The goal is to improve
> latency and jitter for real-time traffic, which has critical QoS
> requirements, especially on hosts under heavy network I/O. If the vifs
> of the different guests are scheduled in a round-robin manner in the
> dom0 backend, then we can improve that scheduling algorithm, e.g. by
> using weighted round robin, which can give more scheduling time to the
> vif of a VoIP-server guest. I would like to know which module or
> function is responsible for scheduling the vifs in dom0?

As I said, it is a NAPI polling function:

http://www.linuxfoundation.org/collaborate/workgroups/networking/napi

So it is scheduled by NAPI, which doesn't have a clue what kind of
packets the device is dealing with, and therefore can't make a decision
based on that. You can try manually configuring the weight of a device
through netif_napi_add() (a sketch follows below), but I strongly
recommend looking into the other ways I mentioned in my previous letter.
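[Editor's note: for illustration only, a minimal sketch of what choosing
a per-device NAPI weight at registration could look like. It assumes the
four-argument netif_napi_add() of the 3.x/4.x kernels under discussion;
the myvif_* names are hypothetical placeholders, not the real netback
code.]

/* Sketch only: a stripped-down driver fragment showing where the NAPI
 * weight is chosen.  Assumes a ~3.x/4.x kernel, where netif_napi_add()
 * still takes an explicit weight argument. */
#include <linux/netdevice.h>

struct myvif {                          /* hypothetical per-vif state */
        struct net_device *dev;
        struct napi_struct napi;
};

static int myvif_poll(struct napi_struct *napi, int budget)
{
        int work_done = 0;

        /* ... process up to 'budget' packets, counting them in
         * work_done ... */

        if (work_done < budget)
                napi_complete(napi);    /* ring drained, stop polling */
        return work_done;
}

static void myvif_setup_napi(struct myvif *vif, bool bulk)
{
        /* A smaller weight makes this device consume less of each
         * softirq round.  The core logs an error for weights above
         * NAPI_POLL_WEIGHT (64), so the practical direction is to tune
         * bulk vifs down rather than boost the VoIP one up.  Either
         * way, NAPI still knows nothing about packet contents. */
        int weight = bulk ? 16 : NAPI_POLL_WEIGHT;

        netif_napi_add(vif->dev, &vif->napi, myvif_poll, weight);
}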
> Thanks,
> Ronald
>
> On Tue, Feb 3, 2015 at 12:54 PM, Zoltan Kiss <zoltan.kiss@xxxxxxxxxx> wrote:
>> On 03/02/15 00:42, Ronald PIna wrote:
>>> Hi,
>>> I am working on an MSc thesis to improve network performance for
>>> guest domains that run real-time services such as VoIP or video
>>> streaming servers. I have an idea to implement a network scheduler
>>> in the network backend; the scheduler may be weighted fair queuing
>>> or weighted round robin. The idea is to schedule first the packets
>>> coming from the real-time guest services; one of those schedulers
>>> could do the job and prioritize the network traffic (a standalone
>>> sketch of the weighted-round-robin idea follows at the end of this
>>> mail). As far as I have studied, previous works explain that the
>>> outgoing network traffic in Xen is scheduled in a round-robin manner
>>> inside the function net_tx_action(). In later versions of Xen this
>>> function has become xenvif_tx_action() and its basic structure has
>>> changed. My primary goal is to replace the round robin with a more
>>> advanced scheduler which introduces priorities. The clear question
>>> is: where is the function that does the round-robin scheduling
>>> located, and are there any concerns about using another network
>>> scheduling method in netback?
>>
>> xenvif_tx_action is called from NAPI polling context, per queue. It
>> copies & maps the packets from the guest to Dom0 (at which point it
>> doesn't have the slightest idea about the packet content), formats
>> them in xenvif_tx_submit to be a well-formed skb, and calls
>> netif_receive_skb to hand them over to the stack. You could try doing
>> some reordering in xenvif_tx_submit or introducing priorities in the
>> ring buffer, but I would strongly discourage you from doing that, for
>> reasons similar to those mentioned by Ian. You would be better off
>> using standard Linux tools for this, see 'man tc', and I would try it
>> on the sending side, in the guest, before netfront gets the packet.
>> Or, if your idea is to prioritize between traffic coming from
>> separate guests, then xenvif_tx_action is definitely the wrong place
>> to do that: as said above, it deals only with the traffic of one
>> particular queue of a netback device. You might want to look into the
>> entity switching/routing between the guests and the NICs: the Linux
>> bridge module, openvswitch, the IP routing engine of the backend,
>> etc. I think the first two could be the reasonable choices; bridge
>> might lack such functionality though.
>>
>> Zoli
>>
>>> Thanks in advance
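[Editor's note: to make the weighted-round-robin idea concrete, here is
a self-contained user-space sketch of the pick logic. It is purely
illustrative, with hypothetical types; nothing here is taken from
netback.]

/* Standalone sketch of weighted round robin over per-guest queues.
 * Weights must be >= 1, or a backlogged queue could never be served. */
#include <stdio.h>

struct vif_queue {
        const char *name;   /* e.g. the guest's vif */
        int weight;         /* slots per round; higher = more service */
        int credit;         /* remaining slots in the current round */
        int backlog;        /* packets waiting */
};

/* Pick the next queue to service; refill credits when a round ends. */
static struct vif_queue *wrr_pick(struct vif_queue *q, int n)
{
        int i, busy;

        for (;;) {
                busy = 0;
                for (i = 0; i < n; i++) {
                        if (q[i].backlog == 0)
                                continue;
                        busy = 1;
                        if (q[i].credit > 0) {
                                q[i].credit--;
                                return &q[i];
                        }
                }
                if (!busy)
                        return NULL;        /* nothing left to send */
                for (i = 0; i < n; i++)     /* new round: refill credits */
                        q[i].credit = q[i].weight;
        }
}

int main(void)
{
        struct vif_queue q[] = {
                { "guest_1 (voip)", 3, 3, 5 },  /* served 3x per round */
                { "guest_2 (bulk)", 1, 1, 5 },
        };
        struct vif_queue *pick;

        while ((pick = wrr_pick(q, 2)) != NULL) {
                printf("serving %s\n", pick->name);
                pick->backlog--;
        }
        return 0;
}

A real backend scheduler would additionally have to refill credits per
softirq round and cap the work done by the NAPI budget, which this toy
loop ignores.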
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel