
[Xen-devel] Re: [PATCH] net: xen-netback: convert to hw_features

On Tue, Apr 19, 2011 at 04:15:48PM +0100, Ian Campbell wrote:
> On Tue, 2011-04-19 at 14:43 +0100, Michał Mirosław wrote:
> > On Tue, Apr 19, 2011 at 02:39:00PM +0100, Ian Campbell wrote:
> > > On Tue, 2011-04-19 at 14:30 +0100, Michał Mirosław wrote:
> > > > On Tue, Apr 19, 2011 at 02:17:53PM +0100, Ian Campbell wrote:
> > > > > I fixed it with the following, I also moved the !can_sg MTU clamping
> > > > > into a set_features hook (like we do with netfront). Am I right that
> > > > > this pattern copes with changes to SG via ethtool etc better? I think
> > > > > it's more future proof in any case.
> > > > This looks wrong. Even if SG is turned on, you might get big skbs which
> > > > are linearized. There is a difference between SG capability and SG offload
> > > > status, and as I see it, the capability is what you need to test for MTU.
> > > So the existing stuff in drivers/net/xen-netfront.c is wrong too?
> > Looks like it. But I don't really know what the real constraints for
> > MTU are.
> > What I do know is that SG, even if turned on, need not be used (and
> > currently it isn't, e.g. if checksum offload is disabled).
> The interesting case is the opposite one, isn't it? IOW if NETIF_F_SG is
> disabled but the frontend/backend agree that they have the capability to
> handle >PAGE_SIZE skbs

Then the driver might get bigger skbs but they won't ever be fragmented.

> In my experience, the normal reason for disabling the NETIF_F_SG offload
> status is that the underlying capability is somehow buggy; otherwise, is
> there any reason to turn it off?

Some features depend on others to function, or on some hardware/software state.
In most cases, though, the reason is the one you gave (the capability also
reflects what the driver has implemented).

Best Regards,
Michał Mirosław
