
RE: [Xen-devel] more profiling



James,

Could you please provide some context and details of this work?
It seems related to the work we are doing in netchannel2 to reuse grants, but
I don't think I understand what it is that you are trying to do or how it is
related.

Thanks

Renato

> -----Original Message-----
> From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
> [mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of
> James Harper
> Sent: Friday, February 29, 2008 5:45 AM
> To: James Harper; Andy Grover
> Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
> Subject: RE: [Xen-devel] more profiling
>
> > What I'd like to do is implement a compromise between my previous
> > buffer management approach (used lots of memory, but no
> > allocate/grant per packet) and your approach (uses minimum memory,
> > but allocate/grant per packet). We would maintain a pool of packets
> > and buffers, and grow and shrink the pool dynamically, as follows:
> > . Create a freelist of packets and buffers
> > . When we need a new packet or buffer, and there are none on the
> >   freelist, allocate them and grant the buffer.
> > . When we are done with them, put them on the freelist
> > . Keep a count of the minimum size of the freelists. If the free
> >   list has been greater than some value (32?) for some time
> >   (5 seconds?) then free half of the items on the list.
> > . Maybe keep a freelist per processor too, to avoid the need for
> >   spinlocks where we are running at DISPATCH_LEVEL
> >
> > I think that gives us a pretty good compromise between memory usage
> > and calls to allocate/grant/ungrant/free.
>
> I have implemented something like the above: a 'page pool',
> which is a list of pre-granted pages. This drops the time
> spent in TxBufferGC and SendQueuedPackets by 30-50%. A good
> start, I think, although there doesn't appear to be much
> improvement in the iperf results, maybe only 20%.
>
> It's time for sleep now, but when I get a chance I'll add the
> same logic to the receive path, and clean it up so xennet can
> unload properly (currently it leaks and/or crashes on unload).
>
> James
>
>
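Roughly, the scheme described in the quoted message would look something like
the sketch below. This is a minimal, platform-neutral illustration only:
alloc_page(), free_page(), grant_page() and ungrant_page() are hypothetical
stand-ins for the real allocator and grant-table calls, and the per-processor
freelists and DISPATCH_LEVEL locking mentioned above are left out for clarity.

    #include <stdlib.h>   /* malloc, free */
    #include <stddef.h>   /* size_t */
    #include <stdint.h>   /* uint32_t */
    #include <time.h>     /* time */

    typedef struct pool_page {
        struct pool_page *next;
        void             *va;    /* page virtual address */
        uint32_t          gref;  /* grant ref, kept while the page is pooled */
    } pool_page_t;

    /* Hypothetical stand-ins for the real allocator and grant-table calls. */
    extern void    *alloc_page(void);
    extern void     free_page(void *va);
    extern uint32_t grant_page(void *va);        /* grant page to the backend */
    extern void     ungrant_page(uint32_t gref);

    #define POOL_HIGH_WATER   32  /* "some value (32?)" */
    #define POOL_TRIM_SECONDS  5  /* "some time (5 seconds?)" */

    static pool_page_t *pool_free_list;
    static size_t       pool_free_count;
    static size_t       pool_min_free;   /* minimum list size since last trim */
    static time_t       pool_last_trim;

    /* Get a pre-granted page: reuse one from the freelist if possible,
     * otherwise allocate and grant a fresh one. */
    pool_page_t *pool_get(void)
    {
        pool_page_t *p = pool_free_list;

        if (p != NULL) {
            pool_free_list = p->next;
            pool_free_count--;
            if (pool_free_count < pool_min_free)
                pool_min_free = pool_free_count;
            return p;
        }

        p = malloc(sizeof(*p));
        if (p == NULL)
            return NULL;
        p->va = alloc_page();
        p->gref = grant_page(p->va);  /* granted once, reused many times */
        return p;
    }

    /* Done with a page: park it on the freelist with its grant still live,
     * so the next user avoids an allocate/grant round trip. */
    void pool_put(pool_page_t *p)
    {
        p->next = pool_free_list;
        pool_free_list = p;
        pool_free_count++;
    }

    /* Called periodically: if the freelist never dropped below the high-water
     * mark during the whole interval, the pool is oversized, so free half. */
    void pool_trim(void)
    {
        time_t now = time(NULL);

        if (now - pool_last_trim < POOL_TRIM_SECONDS)
            return;

        if (pool_min_free > POOL_HIGH_WATER) {
            size_t n = pool_free_count / 2;
            while (n-- > 0 && pool_free_list != NULL) {
                pool_page_t *p = pool_free_list;
                pool_free_list = p->next;
                pool_free_count--;
                ungrant_page(p->gref);
                free_page(p->va);
                free(p);
            }
        }
        pool_min_free = pool_free_count;
        pool_last_trim = now;
    }

The key point is that the grant is taken when a page first enters the pool and
only dropped when the pool is trimmed, so the steady-state fast path does no
allocate/grant/ungrant/free work at all.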

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

