
RE: [Xen-devel] MPI benchmark performance gap between native linux and domU



> If you're on an SMP with the dom0 and domU's on different 
> CPUs (and have CPU to burn) then you might get a performance 
> improvement by artificially capping some of the natural 
> batching to just a couple of packets. You could try modifying 
> netback's net_rx_action to send the notification through to 
> netfront more eagerly. This will help get the latency down, 
> at the cost of burning more CPU.

To be clearer, modify net_rx_action in netback as follows to kick the
frontend after every packet. I expect this might help for some of the
larger message sizes. Kicking on every packet may be overdoing it, so
you might want to adjust it to kick every Nth packet instead, using the
rx_notify array to store the number of packets queued per netfront
driver (see the sketch after the fragment below).

Overall, the MPI SendRecv benchmark is an absolute worst-case scenario
for s/w virtualization. Any 'optimisations' we add will come at the
expense of reduced CPU efficiency, possibly resulting in reduced
bandwidth for many users. The best solution is to use a 'smart NIC' or
HCA (such as the Arsenic GigE we developed) that can deliver packets
directly to VMs. I expect we'll see a number of such NICs on the market
before too long, and they'll be great for Xen.

Ian

        evtchn = netif->evtchn;
        id = netif->rx->ring[MASK_NETIF_RX_IDX(netif->rx_resp_prod)].req.id;
        if ( make_rx_response(netif, id, status, mdata, size) &&
             (rx_notify[evtchn] == 0) )
        {
-            rx_notify[evtchn] = 1;
-            notify_list[notify_nr++] = evtchn;
+            notify_via_evtchn(evtchn);  /* kick netfront immediately */
        }

        netif_put(netif);
        dev_kfree_skb(skb);
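
If kicking on every packet turns out to be too aggressive, here's a
minimal sketch of the every-Nth variant, reusing rx_notify[] as a
per-channel count of responses since the last kick, as suggested above.
The identifiers are taken from the fragment above; the RX_NOTIFY_BATCH
threshold (and its value) is hypothetical, so tune it for your workload:

#define RX_NOTIFY_BATCH 4   /* hypothetical: kick every 4th packet */

        evtchn = netif->evtchn;
        id = netif->rx->ring[MASK_NETIF_RX_IDX(netif->rx_resp_prod)].req.id;
        if ( make_rx_response(netif, id, status, mdata, size) &&
             (++rx_notify[evtchn] >= RX_NOTIFY_BATCH) )
        {
            rx_notify[evtchn] = 0;      /* reset the per-channel count */
            notify_via_evtchn(evtchn);  /* kick netfront every Nth packet */
        }

One caveat: packets that arrive after a kick but never fill the next
batch will sit unnotified, so you may also want to flush any non-zero
counters at the end of net_rx_action.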



 

