
Re: [Xen-devel] [RFC PATCH 03/13] xen-netback: implement TX persistent grants



On Wed, Jun 03, 2015 at 05:07:59PM +0000, Joao Martins wrote:
[...]
> > 
> > How much harder would it be to ref-count inflight grants? Would that
> > simplify or complicate things? I'm just asking, not suggesting you should
> > choose ref-counting over the current scheme.
> > 
> > In principle I favour simple code path over optimisation for every
> > possible corner case.
> 
> ref-counting the persistent grants would mean eliminating the check for
> EBUSY in xenvif_pgrant_new(), though it isn't that much of a simplification.
> 

Right.
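
For what it's worth, what I had in mind was roughly the following
(an untested sketch, not code from your patch: persistent_gnt and
xenvif_pgrant_new() are names taken from the RFC, while the inflight
counter and the pgrant_get()/pgrant_put() helpers are hypothetical):

#include <linux/atomic.h>
#include <linux/rbtree.h>
#include <xen/grant_table.h>

/* Sketch only: a per-grant inflight count instead of a busy flag.
 * With this, xenvif_pgrant_new() would never need to return -EBUSY;
 * a grant already in use is simply referenced again.
 */
struct persistent_gnt {
	struct page *page;
	grant_ref_t gnt;
	grant_handle_t handle;
	atomic_t inflight;		/* replaces the "in use" check */
	struct rb_node node;
};

static struct persistent_gnt *pgrant_get(struct persistent_gnt *pg)
{
	atomic_inc(&pg->inflight);	/* taken by a new TX request */
	return pg;
}

static void pgrant_put(struct persistent_gnt *pg)
{
	atomic_dec(&pg->inflight);	/* dropped on TX completion */
}

But as you say, dropping one -EBUSY check doesn't buy much on its own,
so I wouldn't push for it.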

> What would simplify a lot is if I grant map when we don't get a persistent_gnt
> in xenvif_pgrant_new() and add it to the tree there, instead of doing it in
> xenvif_tx_check_gop. Since this happens only once per persistent grant (and up
> to ring size), I believe it wouldn't hurt performance.
> 

Yeah. Mapping pages inside xenvif_tx_check_gop doesn't sound nice.
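
To make sure I understand the flow you're proposing, something like
this (a hand-written sketch, not against your actual patch; the
pgrant_tree_lookup() and pgrant_map_and_insert() helpers are made-up
names for whatever the real lookup and insert paths are):

#include <xen/grant_table.h>

struct xenvif_queue;	/* real xen-netback type, used opaquely here */
struct persistent_gnt;

/* Hypothetical helpers standing in for the patch's tree handling. */
static struct persistent_gnt *
pgrant_tree_lookup(struct xenvif_queue *queue, grant_ref_t ref);
static struct persistent_gnt *
pgrant_map_and_insert(struct xenvif_queue *queue, grant_ref_t ref);

/* Sketch: on a tree miss, map the grant and insert it right here,
 * so xenvif_tx_check_gop only keeps the plain map/unmap path and
 * loses most of its special cases.
 */
static struct persistent_gnt *
xenvif_pgrant_new(struct xenvif_queue *queue, grant_ref_t ref)
{
	struct persistent_gnt *pg;

	pg = pgrant_tree_lookup(queue, ref);
	if (pg)
		return pg;	/* fast path: grant already mapped */

	/* Slow path: taken at most once per grant (bounded by ring
	 * size), so the extra map here shouldn't show up in
	 * steady-state numbers.
	 */
	return pgrant_map_and_insert(queue, ref);
}

If that's the idea, it does look simpler than patching the mapping
into place from xenvif_tx_check_gop.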

> This way we would remove a lot of the checks in xenvif_tx_check_gop and
> hopefully leave those parts (almost) intact, mainly to be used for the grant
> map/unmap case. The reason I didn't do it is that I wanted to reuse the
> grant map code and thought that preference was given to batching the
> grant maps. But it looks like that definitely makes things more complicated
> and adds more corner cases.
> 
> The same goes for the RX case, where this change would remove a lot of the
> code for adding the grant maps (thus sharing a lot with the TX part), besides
> removing the mixed initial grant copy + map. What do you think?
> 

I can't really comment until I see the code. But in principle I think
this is a step in the right direction.

Wei.

> Joao


 

