
[PATCH net-next 2/2] xen-netfront: re-order error checks in xennet_get_responses()


  • To: "netdev@xxxxxxxxxxxxxxx" <netdev@xxxxxxxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Wed, 13 Jul 2022 11:19:55 +0200
  • Cc: Juergen Gross <jgross@xxxxxxxx>, Stefano Stabellini <stefano@xxxxxxxxxxxxxx>, Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>
  • Delivery-date: Wed, 13 Jul 2022 09:20:01 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

Check the retrieved grant reference first; if it is invalid, there is no
point in having xennet_move_rx_slot() move invalid data, which would only
defer recognition of the problem and likely make diagnosis more difficult.

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
---
I question the log message's claim of a bad ID (which is how I read its
wording): rx->id isn't involved in determining ref. I don't see what else
would be useful to log, though; shortening the message to just "Bad rx
response" doesn't look very useful either.

--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -1043,16 +1043,6 @@ static int xennet_get_responses(struct n
        }
 
        for (;;) {
-               if (unlikely(rx->status < 0 ||
-                            rx->offset + rx->status > XEN_PAGE_SIZE)) {
-                       if (net_ratelimit())
-                               dev_warn(dev, "rx->offset: %u, size: %d\n",
-                                        rx->offset, rx->status);
-                       xennet_move_rx_slot(queue, skb, ref);
-                       err = -EINVAL;
-                       goto next;
-               }
-
                /*
                 * This definitely indicates a bug, either in this driver or in
                 * the backend driver. In future this should flag the bad
@@ -1065,6 +1055,16 @@ static int xennet_get_responses(struct n
                        err = -EINVAL;
                        goto next;
                }
+
+               if (unlikely(rx->status < 0 ||
+                            rx->offset + rx->status > XEN_PAGE_SIZE)) {
+                       if (net_ratelimit())
+                               dev_warn(dev, "rx->offset: %u, size: %d\n",
+                                        rx->offset, rx->status);
+                       xennet_move_rx_slot(queue, skb, ref);
+                       err = -EINVAL;
+                       goto next;
+               }
 
                if (!gnttab_end_foreign_access_ref(ref)) {
                        dev_alert(dev,
