
Re: [Xen-devel] Interesting observation with network event notification and batching



On Mon, Jul 01, 2013 at 11:59:08PM +0800, annie li wrote:
[...]
> >>>1. SKB frag destructor series: to track life cycle of SKB frags. This is
> >>>not yet upstreamed.
> >>Are you mentioning this one 
> >>http://old-list-archives.xen.org/archives/html/xen-devel/2011-06/msg01711.html?
> >>
> >>
> >Yes, but I believe several versions have been posted. The link you
> >have is not the latest version.
> >
> >>>2. Mechanism to negotiate the max slots the frontend can use: mapping
> >>>requires the backend's MAX_SKB_FRAGS >= the frontend's MAX_SKB_FRAGS.
> >>>
> >>>3. Lazy flushing mechanism or persistent grants: ???
> >>I did some tests with persistent grants before; they did not show
> >>better performance than grant copy. But I was using the default
> >>netperf parameters and had not tried large packet sizes. Your
> >>results remind me that maybe persistent grants would show similar
> >>results with larger packet sizes too.
> >>
> >"No better performance" -- that's because both mechanisms are copying?
> >However I presume persistent grant can scale better? From an earlier
> >email last week, I read that copying is done by the guest so that this
> >mechanism scales much better than hypervisor copying in blk's case.
> 
> The original persistent-grant patch does memcpy on both the netback
> and netfront sides. I am thinking the performance might improve if
> the memcpy were removed from netfront.

I would say that removing the copy in netback would scale better.
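
To illustrate the point, here is a purely userspace sketch -- nothing to
do with the actual netback code, and the 64K "packet" size and iteration
count are made up -- showing the CPU time that a per-packet copy keeps on
the backend, versus simply handing the buffer over:

/* Illustration only: userspace sketch, not netback code.  Compares the
 * cost of copying every "packet" on the receiving side against just
 * handing the buffer over.  PKT_SIZE / NR_PACKETS are arbitrary. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define PKT_SIZE   (64 * 1024)          /* pretend 64KB GSO packet */
#define NR_PACKETS 100000

static double secs(struct timespec a, struct timespec b)
{
	return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
}

int main(void)
{
	char *src = malloc(PKT_SIZE), *dst = malloc(PKT_SIZE);
	struct timespec t0, t1;
	volatile char sink = 0;
	long i;

	memset(src, 0xaa, PKT_SIZE);

	/* "copying backend": every packet is memcpy'd before delivery */
	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < NR_PACKETS; i++) {
		memcpy(dst, src, PKT_SIZE);
		sink ^= dst[0];
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);
	printf("copy per packet:    %.3f s\n", secs(t0, t1));

	/* "mapping backend": the buffer is only passed along */
	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < NR_PACKETS; i++)
		sink ^= src[0];
	clock_gettime(CLOCK_MONOTONIC, &t1);
	printf("no per-packet copy: %.3f s\n", secs(t0, t1));

	free(src);
	free(dst);
	return 0;
}

The absolute numbers are meaningless; the point is simply that the copy
cost is per-byte on the backend CPU, which is what limits scaling as
more guests transmit at once.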

> Moreover, I also have a feeling that the persistent-grant numbers we
> got were based on tests with the default netperf parameters, just
> like Wei's hack, which does not show better performance without
> large packets. So let me try some tests with large packets.
> 

Sadly enough, I found out today that this sort of test seems to be
quite inconsistent. On an Intel 10G NIC the throughput is actually
higher without forcing iperf / netperf to generate large packets.
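
For reference, these are the kind of back-to-back runs I mean, wrapped
in a small C driver so it is easy to repeat; the peer address, the 20s
test length and the 64K send size are placeholders, not the exact
parameters from my runs:

/* Sketch of the comparison runs; shells out to netperf repeatedly so
 * the run-to-run variance is visible.  Peer address, test length and
 * message size below are placeholders. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	const char *peer = "192.168.0.2";  /* placeholder: netserver host */
	char cmd[128];
	int i;

	for (i = 0; i < 3; i++) {
		/* default send size */
		snprintf(cmd, sizeof(cmd),
		         "netperf -H %s -t TCP_STREAM -l 20", peer);
		if (system(cmd) != 0)
			fprintf(stderr, "netperf run failed\n");

		/* force large (64K byte) sends */
		snprintf(cmd, sizeof(cmd),
		         "netperf -H %s -t TCP_STREAM -l 20 -- -m 65536", peer);
		if (system(cmd) != 0)
			fprintf(stderr, "netperf run failed\n");
	}
	return 0;
}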


Wei.

> Thanks
> Annie
