
[Xen-devel] RE: [PATCH 1/2] Fix ixgbe RSS operation



Mitch,

The patches were mangled by the email program (I am not sure whether on my
end or yours). I could fix patch 1/2, since there were just one or two
missing line breaks that I could find easily. Patch 2/2 had many corrupted
lines, and I was not able to fix them manually. In any case, patch 2/2 is
not really needed for testing, as it only changes the CONFIG option for
VMDQ. It may be a good idea to send the patches as attachments in the
future.

After applying patch 1/2, I continue to see the same kernel oops as before.
Looking at the patch, it seems that you added a check for
IXGBE_FLAG_VMDQ_ENABLED before calling the netchannel2 vmq functions.
However, we are running in VMDQ mode, so this additional check evaluates to
true and the code behaves exactly as before. This explains why I keep
seeing the same problem.

The real problem is that more than one queue is being used in dom0 before
any guest is created. I imagine this means that RSS is enabled. The code
does not seem to support RSS and VMDQ being enabled at the same time, since
it decides whether a queue is a guest queue by checking whether queue
index > 0. Obviously, if dom0 has more than one queue, this condition will
also be true for a dom0 queue, and the code will mistakenly treat it as a
guest queue.
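
To illustrate (this is just a sketch, not driver code: the struct and the
num_dom0_queues field are made up for the example):

/* Sketch only: why "queue_index > 0" misclassifies dom0 RSS queues. */
struct adapter_sketch {
        unsigned int flags;
        int num_dom0_queues;    /* queues RSS assigns to dom0 (assumed) */
};

#define SKETCH_FLAG_VMDQ_ENABLED 0x1

/* Current logic: any non-zero queue index is treated as a guest (VMDq)
 * queue. With RSS on, dom0 owns queues 0..N-1, so dom0's queue 1 wrongly
 * takes the vmq_* path. */
static int is_guest_queue_current(struct adapter_sketch *a, int queue_index)
{
        return (a->flags & SKETCH_FLAG_VMDQ_ENABLED) && queue_index > 0;
}

/* What I would expect instead: only queues beyond the dom0 pool are guest
 * queues. With RSS disabled, num_dom0_queues is 1 and this reduces to the
 * current check. */
static int is_guest_queue_expected(struct adapter_sketch *a, int queue_index)
{
        return (a->flags & SKETCH_FLAG_VMDQ_ENABLED) &&
               queue_index >= a->num_dom0_queues;
}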
 
I added the printk below and confirmed that, in addition to queue index 0,
ixgbe_alloc_rx_buffers() is also being called for queue index 1, and that
is when the kernel crashes. So it seems that RSS is in fact enabled. I am
not explicitly enabling RSS, but it is probably enabled by default. I think
the easy solution for now is to disable RSS whenever VMDQ is enabled. Is
this something easy to fix? Is there a driver parameter that I can set to
explicitly disable RSS while you work on the fix?
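
Something along these lines in ixgbe_param.c is roughly what I have in mind
(an untested sketch; IXGBE_FLAG_RSS_ENABLED and the ring_feature field are
my guesses at the right knobs and may not match your tree):

#ifdef CONFIG_XEN_NETDEV2_BACKEND
        /* Sketch, untested: force RSS off while VMDQ is enabled, so dom0
         * keeps a single queue and queue_index > 0 can safely mean
         * "guest queue". */
        if (adapter->flags & IXGBE_FLAG_VMDQ_ENABLED) {
                printk(KERN_INFO "ixgbe: RSS disabled for Xen VMDQ\n");
                adapter->flags &= ~IXGBE_FLAG_RSS_ENABLED;
                adapter->ring_feature[RING_F_RSS].indices = 1;
        }
#endif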

I also suspect there is some problem with your Xen configuration, since you
have not been able to reproduce the problem. If you send me the output of
your "xm dmesg" and "dmesg", I can try to find any indication of a problem
in your environment that is preventing you from running in VMDQ mode and
thus from seeing the problem.

Thanks

Renato

diff -r 76868b6173ff drivers/net/ixgbe/ixgbe_main.c
--- a/drivers/net/ixgbe/ixgbe_main.c    Tue Feb 10 16:56:42 2009 -0800
+++ b/drivers/net/ixgbe/ixgbe_main.c    Tue Feb 10 18:30:37 2009 -0800
@@ -568,6 +568,7 @@ static void ixgbe_alloc_rx_buffers(struc
                if (!bi->skb) {
                        struct sk_buff *skb;
 #ifdef CONFIG_XEN_NETDEV2_BACKEND
+                       printk("QUEUE INDEX = %d\n", rx_ring->queue_index);
                        if ((adapter->flags & IXGBE_FLAG_VMDQ_ENABLED) &&
                            rx_ring->queue_index) {
                                skb = vmq_alloc_skb(adapter->netdev,



> -----Original Message-----
> From: cluemerchant@xxxxxxxxx [mailto:cluemerchant@xxxxxxxxx] 
> On Behalf Of Mitch Williams
> Sent: Tuesday, February 10, 2009 3:10 PM
> To: xen-devel@xxxxxxxxxxxxxxxxxxx
> Cc: Santos, Jose Renato G; steven.smith@xxxxxxxxxxxxx
> Subject: [PATCH 1/2] Fix ixgbe RSS operation
> 
> The addition of VMDq support to ixgbe completely broke normal 
> RSS receive operation.  Since RSS is the default operating 
> mode, the driver would cause a kernel panic as soon as the 
> interface was opened.
>  This patch fixes the problem by correctly checking the 
> VMDQ_ENABLED flag before attempting any VMDQ-specific call.
> 
> Signed-off-by: Mitch Williams <mitch.a.williams@xxxxxxxxx>
> 
> diff -r fcc044a90d40 drivers/net/ixgbe/ixgbe_main.c
> --- a/drivers/net/ixgbe/ixgbe_main.c  Thu Jan 29 10:46:35 2009 +0000
> +++ b/drivers/net/ixgbe/ixgbe_main.c  Tue Feb 10 10:13:32 2009 -0800
> @@ -432,15 +432,11 @@
>       u16 tag = le16_to_cpu(rx_desc->wb.upper.vlan);
> 
>  #ifdef CONFIG_XEN_NETDEV2_BACKEND
> -     if(ring->queue_index) {
> +     if ((adapter->flags & IXGBE_FLAG_VMDQ_ENABLED) && ring->queue_index) {
>               /* This is a VMDq packet destined for a VM. */
>               vmq_netif_rx(skb, ring->queue_index);
>               return;
>       }
> -     else {
> -             netif_rx(skb);
> -             return;
> -     }
>  #endif
>  #ifndef IXGBE_NO_INET_LRO
>       if (adapter->netdev->features & NETIF_F_LRO &&
> @@ -524,7 +520,8 @@
>       adapter->hw_csum_rx_good++;
> 
>  #ifdef CONFIG_XEN_NETDEV2_BACKEND
> -     skb->proto_data_valid = 1;
> +     if (adapter->flags & IXGBE_FLAG_VMDQ_ENABLED)
> +             skb->proto_data_valid = 1;
>  #endif
>  }
> 
> @@ -1060,9 +1057,8 @@
>               skb = rx_buffer_info->skb;
>               rx_buffer_info->skb = NULL;
>  #ifdef CONFIG_XEN_NETDEV2_BACKEND
> -             if(!rx_ring->queue_index || !skb_shinfo(skb)->nr_frags) {
> -                     prefetch(skb->data - NET_IP_ALIGN);
> -             } else {
> +             if ((adapter->flags & IXGBE_FLAG_VMDQ_ENABLED) &&
> +                  rx_ring->queue_index) {
>                       /* for Xen VMDq, packet data goes in first page of
>                        * skb, instead of data.
>                        */
> @@ -1071,6 +1067,8 @@
>                                      PAGE_SIZE, PCI_DMA_FROMDEVICE);
>                       skb->len += len;
>                       skb_shinfo(skb)->frags[0].size = len;
> +             } else {
> +                     prefetch(skb->data - NET_IP_ALIGN);
>               }
>  #else
>               prefetch(skb->data - NET_IP_ALIGN);
> @@ -1135,7 +1133,7 @@
>               total_rx_bytes += skb->len;
>               total_rx_packets++;
>  #ifdef CONFIG_XEN_NETDEV2_BACKEND
> -             if(!rx_ring->queue_index)
> +             if (skb->data)
>  #endif
>                       skb->protocol = eth_type_trans(skb, adapter->netdev);
> 
> @@ -2907,7 +2905,8 @@
>               rx_buffer_info = &rx_ring->rx_buffer_info[i];
>               if (rx_buffer_info->skb) {
>  #ifdef CONFIG_XEN_NETDEV2_BACKEND
> -                     if (rx_ring->queue_index) {
> +                     if ((adapter->flags & IXGBE_FLAG_VMDQ_ENABLED) &&
> +                          rx_ring->queue_index) {
>                               pci_unmap_page(pdev, rx_buffer_info->dma,
>                                              PAGE_SIZE,
>                                              PCI_DMA_FROMDEVICE);
> diff -r fcc044a90d40 drivers/net/ixgbe/ixgbe_param.c
> --- a/drivers/net/ixgbe/ixgbe_param.c Thu Jan 29 10:46:35 2009 +0000
> +++ b/drivers/net/ixgbe/ixgbe_param.c Tue Feb 10 10:13:32 2009 -0800
> @@ -724,8 +724,9 @@
>               }
>  #endif
>  #ifdef CONFIG_XEN_NETDEV2_BACKEND
> -     if (adapter->flags &
> -         (IXGBE_FLAG_RX_PS_CAPABLE | IXGBE_FLAG_VMDQ_ENABLED)) {
> +     if ((adapter->flags &
> +           (IXGBE_FLAG_RX_PS_CAPABLE | IXGBE_FLAG_VMDQ_ENABLED)) ==
> +           (IXGBE_FLAG_RX_PS_CAPABLE | IXGBE_FLAG_VMDQ_ENABLED)) {
>               printk(KERN_INFO "ixgbe: packet split disabled for Xen VMDQ\n");
>               adapter->flags &= ~IXGBE_FLAG_RX_PS_CAPABLE;
>       }
> 
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

