
Re: [Xen-devel] [RFC PATCH 03/13] xen-netback: implement TX persistent grants



On 02 Jun 2015, at 16:53, Wei Liu <wei.liu2@xxxxxxxxxx> wrote:

> On Fri, May 22, 2015 at 10:24:39AM +0000, Joao Martins wrote:
>> 
>> On 19 May 2015, at 17:23, Wei Liu <wei.liu2@xxxxxxxxxx> wrote:
>>> On Tue, May 12, 2015 at 07:18:27PM +0200, Joao Martins wrote:
>>>> Introduces persistent grants for TX path which follows similar code path
>>>> as the grant mapping.
>>>> 
>>>> It starts by checking if there's a persistent grant available for header
>>>> and frags grefs and if so setting it in tx_pgrants. If no persistent grant
>>>> is found in the tree for the header it will resort to grant copy (but
>>>> preparing the map ops and add them laster). For the frags it will use the
>>>                                    ^
>>>                                    later
>>> 
>>>> tree page pool, and in case of no pages it falls back to grant map/unmap
>>>> using mmap_pages. When the skb destructor callback gets called we release
>>>> the slot and persistent grant within the callback to avoid waking up the
>>>> dealloc thread. As long as there are no unmaps to be done the dealloc
>>>> thread will remain inactive.
>>>> 
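
To make the flow described above concrete, roughly (only xenvif_pgrant_new()
and tx_pgrants are names from the patch; the other helpers and the exact
signature are illustrative, not the actual code):

    /* Per TX slot: try a persistent grant first, otherwise fall back. */
    pgnt = xenvif_pgrant_new(queue, gref);
    if (pgnt) {
            queue->tx_pgrants[pending_idx] = pgnt;  /* reuse existing mapping */
    } else if (is_header) {
            /* header: grant copy now; map op prepared and added later */
    } else if (pool_page_available(queue)) {        /* hypothetical */
            /* frag: take a page from the tree page pool and map it */
    } else {
            /* frag: plain grant map/unmap via mmap_pages */
    }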
>>> 
>>> This scheme looks complicated. Can we just use one
>>> scheme at a time? What's the rationale for using this combined scheme?
>>> Maybe you're thinking about using a max_grants < ring_size to save
>>> memory?
>> 
>> Yes, my purpose was to allow a max_grants < ring_size to save the amount of
>> memory mapped. I did a bulk transfer test with iperf and the max amount of
>> grants in the tree was <160 TX grants, without affecting the max performance;
>> though using pktgen fills the tree completely.
>> The second reason is to handle the case of a (malicious?) frontend providing
>> more grefs than the max allowed, in which case I would fall back to grant
>> map/unmap.
>> 
> 
> This is indeed a valid concern. The only method is to expire the oldest
> grant when that happens -- but that is just complexity in another place,
> not really simplifying anything.
> 
>>> 
>>> I've only skimmed the patch. I will do a detailed review after we're sure
>>> this is the right way to go.
>>> 
> [...]
>>> 
>>> Under what circumstance can we retrieve an already-in-use persistent
>>> grant? You seem to suggest this is a bug in the RX case.
>> 
>> A guest could try to share the same mapped page across multiple frags,
>> in which case I fall back to map/unmap. I think this is a limitation in
>> the way we manage the persistent grants, where we can only have a single
>> reference to a persistent grant in flight.
>> 
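
In code terms, roughly: the tree entry carries a single in-flight state, so a
second frag hitting the same gref can't take it again (field and helper names
below are illustrative, not the actual patch):

    pgnt = pgrant_tree_lookup(queue, gref);             /* hypothetical */
    if (pgnt && pgnt->inflight)
            return ERR_PTR(-EBUSY); /* same page already in use by another
                                     * frag; fall back to grant map/unmap */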
> 
> How much harder would it be to ref-count inflight grants? Would that
> simplify or complicate things? I'm just asking, not suggesting you should
> choose ref-counting over the current scheme.
> 
> In principle I favour a simple code path over optimising for every
> possible corner case.

ref-counting the persistent grants would mean eliminating the check for
EBUSY in xenvif_pgrant_new(), though it isn't that much of a simplification.
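
For reference, a minimal sketch of what a ref-counted inflight grant could
look like (the struct layout and helper names here are hypothetical, not
what the patch currently does):

    struct persistent_gnt {
            grant_ref_t gref;
            grant_handle_t handle;
            struct page *page;
            atomic_t inflight;      /* instead of a single "busy" state */
            struct rb_node node;
    };

    /* Lookup would then always succeed for an already-mapped gref,
     * so there is no EBUSY case left for the caller to handle. */
    static struct persistent_gnt *pgrant_get(struct persistent_gnt *pgnt)
    {
            atomic_inc(&pgnt->inflight);
            return pgnt;
    }

    static void pgrant_put(struct persistent_gnt *pgnt)
    {
            atomic_dec(&pgnt->inflight);
    }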

What would simplify things a lot is to grant map when we don't get a
persistent_gnt in xenvif_pgrant_new() and add it to the tree there, instead
of doing it in xenvif_tx_check_gop. Since this happens only once per
persistent grant (and at most ring size times), I believe it wouldn't hurt
performance.
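
Roughly something like the below (only xenvif_pgrant_new() and
xenvif_tx_check_gop() are names from the patch; the tree/map helpers and
fields are made up for illustration):

    /* Sketch: map and insert into the tree inside xenvif_pgrant_new()
     * itself when the gref isn't there yet, so xenvif_tx_check_gop()
     * no longer needs to distinguish the persistent case. */
    static struct persistent_gnt *xenvif_pgrant_new(struct xenvif_queue *queue,
                                                    grant_ref_t gref)
    {
            struct persistent_gnt *pgnt;

            pgnt = pgrant_tree_lookup(queue, gref);         /* hypothetical */
            if (pgnt)
                    return pgnt;

            if (queue->nr_pgrants >= queue->max_pgrants)    /* hypothetical */
                    return NULL;    /* caller falls back to grant map/unmap */

            pgnt = pgrant_map_one(queue, gref);             /* one-off map */
            if (!pgnt)
                    return NULL;

            pgrant_tree_insert(queue, pgnt);                /* hypothetical */
            return pgnt;
    }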

This way we would remove a lot of the checks in xenvif_tx_check_gop and
hopefully leave those parts (almost) intact, used mainly for the grant
map/unmap case. The reason I didn't do it that way is that I wanted to reuse
the grant map code and thought preference was given to batching the grant
maps. But it looks like that definitely makes things more complicated and
adds more corner cases.

The same goes for the RX case, where this change would remove a lot of the
code for adding the grant maps (thus sharing a lot with the TX part), besides
removing the mixed initial grant copy + map. What do you think?

Joao

