
RE: [Xen-devel] MPI benchmark performance gap between native linux and domU



>   I believe your problem is due to a higher network latency 
> in Xen. Your formula to compute throughput uses the inverse 
> of round trip latency (if I understood it correctly). This 
> round trip latency. Your latency measurements show a higher 
> round trip latency. Your latency mesurements show a higher 
> value for domainU and this is the reason for the lower 
> throughput.  I am not sure but it is possible that network 
> interrupts or event notifications in the inter-domain channel 
> are being coalesced and causing longer latency. Keir, do 
> event notifications get coalesced in the inter-domain I/O 
> channel for networking?
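
(For concreteness: in a ping-pong style benchmark the reported
bandwidth is roughly message_size / one_way_latency, so a higher RTT
translates directly into lower throughput. The numbers in the snippet
below are made up, purely to illustrate that inverse relationship.)

    #include <stdio.h>

    /* Toy illustration only: all values below are hypothetical, just
     * to show that a latency-bound benchmark's throughput scales as
     * the inverse of the round trip time. */
    int main(void)
    {
        double msg_bytes     = 1024.0;   /* small, latency-bound message */
        double rtt_native_us = 100.0;    /* hypothetical native RTT      */
        double rtt_domu_us   = 300.0;    /* hypothetical domU RTT        */

        /* throughput = message size / one-way latency; bytes/us == MB/s */
        printf("native: %.1f MB/s, domU: %.1f MB/s\n",
               msg_bytes / (rtt_native_us / 2.0),
               msg_bytes / (rtt_domu_us   / 2.0));
        return 0;
    }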

There's no timeout-based coalescing right now, so we'll be pushing
through packets as soon as the sending party empties its own work
queue.[*]

If you're on an SMP box with dom0 and domU on different CPUs (and have
CPU to burn) then you might get a performance improvement by
artificially capping some of the natural batching to just a couple of
packets. You could try modifying netback's net_rx_action to send the
notification through to netfront more eagerly. This will help get the
latency down, at the cost of burning more CPU.
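
A rough sketch of what I mean by capping the batching (the helper
names here are placeholders, not the real netback symbols -- the real
change would go into net_rx_action itself):

    #define RX_NOTIFY_BATCH 2                /* notify every N packets */

    extern int  rx_work_pending(void);       /* placeholder: more packets queued?   */
    extern void queue_rx_response(void);     /* placeholder: hand one packet to domU */
    extern void notify_frontend(void);       /* placeholder: kick the event channel  */

    static void net_rx_action_sketch(void)
    {
        int queued = 0;

        while (rx_work_pending()) {
            queue_rx_response();

            if (++queued >= RX_NOTIFY_BATCH) {
                notify_frontend();           /* eager kick: lower latency,
                                                more events, more CPU     */
                queued = 0;
            }
        }

        if (queued)
            notify_frontend();               /* flush the tail of the batch */
    }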

Ian

[*] We actually need to add some timeout-based coalescing to make true
inter-VM communication work more efficiently (i.e. two VMs on the same
node talking to each other rather than out over the network). We'll
probably need to have some heuristic to detect when we're entering a
'high bandwidth regime' and only then enable the timeout-forced batching.
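
One possible shape for that heuristic (again just a sketch, nothing
that exists in the tree today): count packets over a short window and
only defer notifications behind a timer once the rate crosses a
threshold, so latency-sensitive low-rate traffic is left alone.

    #define HIGH_RATE_PKTS_PER_MS 50         /* threshold: a guess, needs tuning */
    #define COALESCE_TIMEOUT_US   100        /* max added delay when batching    */

    extern void notify_frontend(void);       /* placeholder: kick the event channel */
    extern void arm_flush_timer(int usecs);  /* placeholder: force a flush later    */

    struct coalesce_state {
        unsigned long pkts_this_ms;
        int           batching;              /* nonzero => timer-forced batching */
    };

    static void on_packet_queued(struct coalesce_state *s)
    {
        s->pkts_this_ms++;

        if (s->batching) {
            arm_flush_timer(COALESCE_TIMEOUT_US);  /* notification deferred */
            return;
        }

        notify_frontend();                   /* low-rate regime: as today */
    }

    static void every_millisecond(struct coalesce_state *s)
    {
        /* flip regimes based on the rate seen in the last 1ms window */
        s->batching = (s->pkts_this_ms >= HIGH_RATE_PKTS_PER_MS);
        s->pkts_this_ms = 0;
    }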


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

