
Re: [Xen-devel] [PATCH 5/6] xen-netback: coalesce slots before copying



On 25/03/13 11:08, Wei Liu wrote:
> This patch tries to coalesce tx requests when constructing grant copy
> structures. It enables netback to deal with the situation where the
> frontend's MAX_SKB_FRAGS is larger than the backend's MAX_SKB_FRAGS.
> 
> It defines max_skb_slots, which is an estimate of the maximum number of slots
> a guest can send; anything bigger than that is considered malicious. Now it is
> set to 20, which should be enough to accommodate Linux (16 to 19).
> 
> Also change variable name from "frags" to "slots" in netbk_count_requests.
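
(For reference, the coalescing idea from the patch, as I understand it,
is roughly the following.  This is a simplified sketch, not the actual
netback code; the real implementation builds GNTTABOP_copy structures.)

#include <stddef.h>

#define PAGE_SIZE 4096

struct tx_slot { size_t size;	/* bytes carried in this ring slot */ };

/*
 * Pack consecutive source slots into destination pages, so a packet
 * spread over many small slots consumes fewer backend frags.  Returns
 * the number of destination pages needed; there is still one grant
 * copy operation per source slot.
 */
static size_t coalesce_slots(const struct tx_slot *slots, size_t nr_slots)
{
	size_t i, used = 0, pages = 0;

	for (i = 0; i < nr_slots; i++) {
		if (used + slots[i].size > PAGE_SIZE) {
			pages++;	/* current page full: start a new one */
			used = 0;
		}
		used += slots[i].size;
	}
	return pages + (used ? 1 : 0);
}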

It is worth summarizing an (off-line) discussion I had with Wei on this
patch.

There are two regressions that need to be addressed here.

1. The reduction of the number of supported ring entries (slots) per
packet (from 18 to 17).

2. The XSA-39 security fix, which turned "too many frags" errors from
just dropping the packet into a fatal error that disables the VIF.

The root cause of the problem is that the protocol's per-packet slot
limit is poorly specified: it is defined in terms of a property
external to the netback/netfront drivers (i.e., the kernel's
MAX_SKB_FRAGS).

A secondary problem is that some frontends have used a max slots per
packet value larger than netback has supported; e.g., the Windows GPLPV
drivers use up to 19.  Such packets have always been dropped.

The first step is to properly specify the maximum number of slots per
packet as part of the interface.  This should be specified as 18 (the
historical value).
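
Concretely, this could be a single constant in the canonical interface
header; the name below is only a suggestion, not an agreed one:

/*
 * Suggested addition to the canonical netif.h (name illustrative):
 * the maximum number of ring slots a frontend may use for a single
 * packet, fixed at the historical value.
 */
#define XEN_NETIF_MAX_SLOTS_PER_PACKET 18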

The second step is to define a threshold for slots per packet, above
which the guest is considered to be malicious and the error is fatal.
20 seems a sensible value here.

The behavior of netback for a packet is thus (see the sketch after this
list):

    1-18 slots: valid
   19-20 slots: drop and respond with an error
   21+   slots: fatal error
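
In code, the check in netbk_count_requests() would look roughly like
this (a sketch only, with placeholder constant names, not the actual
patch):

#define MAX_SLOTS_PER_PACKET	18	/* protocol limit */
#define FATAL_SLOTS_THRESHOLD	20	/* above this => malicious guest */

enum pkt_verdict { PKT_VALID, PKT_DROP, PKT_FATAL };

static enum pkt_verdict classify_packet(unsigned int slots)
{
	if (slots <= MAX_SLOTS_PER_PACKET)
		return PKT_VALID;	/* 1-18: process normally */
	if (slots <= FATAL_SLOTS_THRESHOLD)
		return PKT_DROP;	/* 19-20: drop, respond with error */
	return PKT_FATAL;		/* 21+: disable the VIF */
}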

Note that we do not make 19-20 slots valid, as this would be a change
to the protocol and guests may end up relying on it; that would then
break those guests if they migrate to or start on a host with a limit
of 18.

A third (and future) step would be to investigate whether increasing
the slots-per-packet limit is sensible.  There would then need to be a
mechanism to negotiate this limit between the front and back ends.
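
Such a negotiation would presumably happen via xenstore.  A sketch,
assuming a hypothetical "feature-max-slots" key (not an agreed part of
the protocol):

#include <xen/xenbus.h>

/* Backend: advertise the per-packet slot limit it supports. */
static void backend_advertise_slots(struct xenbus_device *dev)
{
	xenbus_printf(XBT_NIL, dev->nodename, "feature-max-slots",
		      "%u", 18);
}

/* Frontend: read the backend's limit, defaulting to 18 if absent. */
static unsigned int frontend_max_slots(struct xenbus_device *dev)
{
	unsigned int max_slots;

	if (xenbus_scanf(XBT_NIL, dev->otherend, "feature-max-slots",
			 "%u", &max_slots) != 1)
		max_slots = 18;
	return max_slots;
}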

David
