
Re: [Xen-devel] [PATCH net-next V7 3/4] xen-netback: coalesce slots in TX path and fix regressions



>>> On 22.04.13 at 14:20, Wei Liu <wei.liu2@xxxxxxxxxx> wrote:
> @@ -1256,11 +1394,12 @@ static unsigned xen_netbk_tx_build_gops(struct xen_netbk *netbk)
>       struct sk_buff *skb;
>       int ret;
>  
> -     while (((nr_pending_reqs(netbk) + MAX_SKB_FRAGS) < MAX_PENDING_REQS) &&
> +     while ((nr_pending_reqs(netbk) + XEN_NETIF_NR_SLOTS_MIN
> +             < MAX_PENDING_REQS) &&
>               !list_empty(&netbk->net_schedule_list)) {
>               struct xenvif *vif;
>               struct xen_netif_tx_request txreq;
> -             struct xen_netif_tx_request txfrags[MAX_SKB_FRAGS];
> +             struct xen_netif_tx_request txfrags[max_skb_slots];

With max_skb_slots only having a lower limit enforced, this
variable-length stack array basically gives the admin a way to
crash the kernel, by overflowing the kernel stack, without
necessarily being aware of it (and, considering that this would be
memory corruption, without necessarily being able to readily
connect the crash to the overly large module parameter).
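
For a rough sense of scale (illustrative numbers only; this assumes
the usual 12-byte xen_netif_tx_request and an 8 kB x86-64 kernel
stack):

	struct xen_netif_tx_request txfrags[max_skb_slots];
	/*
	 * sizeof(struct xen_netif_tx_request) == 12, so setting e.g.
	 * max_skb_slots = 1024 puts ~12 KiB of txfrags[] alone on the
	 * stack, more than the entire stack, and nothing in the module
	 * parameter handling prevents such a value.
	 */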

In any case I'm of the opinion that dynamically sized stack
objects aren't really desirable to have in the kernel.

In any event, with a few tweaks netbk_count_requests() could
certainly be made not to touch txp entries past XEN_NETIF_NR_SLOTS_MIN
(maybe XEN_NETIF_NR_SLOTS_MIN + 1), so the on-stack array wouldn't
need to be sized by the module parameter at all.
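
For concreteness, one way such a tweak might look (a sketch only, not
a tested patch; the dropped_tx scratch variable and where exactly the
cut-off sits are illustrative):

	int slots = 0;
	struct xen_netif_tx_request dropped_tx = { 0 };
	struct xen_netif_tx_request *dst;

	do {
		/*
		 * Once the caller's fixed-size txp[] (XEN_NETIF_NR_SLOTS_MIN
		 * entries) is full, keep walking and validating the rest of
		 * the chain, but copy further requests into a throw-away
		 * local so nothing past the end of txp[] is ever written.
		 */
		dst = slots < XEN_NETIF_NR_SLOTS_MIN ? txp + slots
						     : &dropped_tx;

		memcpy(dst, RING_GET_REQUEST(&vif->tx, cons + slots),
		       sizeof(*dst));

		/* ... existing per-slot validation and error handling ... */

		slots++;
	} while (dst->flags & XEN_NETTXF_more_data);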

Jan

>               struct page *page;
>               struct xen_netif_extra_info extras[XEN_NETIF_EXTRA_TYPE_MAX-1];
>               u16 pending_idx;



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
