
Re: [PATCH v2 2/3] xen/blkfront: don't take local copy of a request from the ring page


  • To: Juergen Gross <jgross@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Fri, 9 Jul 2021 10:55:37 +0200
  • Cc: <xen-devel@xxxxxxxxxxxxxxxxxxxx>, <linux-block@xxxxxxxxxxxxxxx>, <linux-kernel@xxxxxxxxxxxxxxx>, Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>, Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Jens Axboe <axboe@xxxxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>
  • Delivery-date: Fri, 09 Jul 2021 08:55:50 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Thu, Jul 08, 2021 at 02:43:44PM +0200, Juergen Gross wrote:
> In order to avoid a malicious backend being able to influence the local
> copy of a request, build the request locally first and then copy it to
> the ring page, instead of doing it the other way round as is done today.
> 
> Signed-off-by: Juergen Gross <jgross@xxxxxxxx>
> Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>

Acked-by: Roger Pau Monné <roger.pau@xxxxxxxxx>

Thanks!

One unrelated comment below.
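For the archive, the gist of the change being acked, as a simplified
sketch (not the literal driver code; error handling and most fields
elided):

    /* Old flow: build the request directly in the shared ring page and
     * only afterwards snapshot it into the private shadow.  A malicious
     * backend can rewrite the ring entry in between, so the "private"
     * copy later used for recovery may already be attacker controlled.
     */
    ring_req = RING_GET_REQUEST(&rinfo->ring, rinfo->ring.req_prod_pvt);
    ring_req->operation = BLKIF_OP_DISCARD;
    /* ... */
    rinfo->shadow[id].req = *ring_req;

    /* New flow: build the request in frontend-private memory first and
     * only then copy the finished structure into the shared page.
     */
    ring_req = &rinfo->shadow[id].req;
    ring_req->operation = BLKIF_OP_DISCARD;
    /* ... */
    *final_ring_req = *ring_req;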

> ---
> V2:
> - init variable to avoid potential compiler warning (Jan Beulich)
> ---
>  drivers/block/xen-blkfront.c | 25 +++++++++++++++----------
>  1 file changed, 15 insertions(+), 10 deletions(-)
> 
> diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
> index 86356014d35e..80701860870a 100644
> --- a/drivers/block/xen-blkfront.c
> +++ b/drivers/block/xen-blkfront.c
> @@ -546,7 +546,7 @@ static unsigned long blkif_ring_get_request(struct blkfront_ring_info *rinfo,
>       rinfo->shadow[id].status = REQ_WAITING;
>       rinfo->shadow[id].associated_id = NO_ASSOCIATED_ID;
>  
> -     (*ring_req)->u.rw.id = id;
> +     rinfo->shadow[id].req.u.rw.id = id;
>  
>       return id;
>  }
> @@ -554,11 +554,12 @@ static unsigned long blkif_ring_get_request(struct blkfront_ring_info *rinfo,
>  static int blkif_queue_discard_req(struct request *req, struct blkfront_ring_info *rinfo)
>  {
>       struct blkfront_info *info = rinfo->dev_info;
> -     struct blkif_request *ring_req;
> +     struct blkif_request *ring_req, *final_ring_req;
>       unsigned long id;
>  
>       /* Fill out a communications ring structure. */
> -     id = blkif_ring_get_request(rinfo, req, &ring_req);
> +     id = blkif_ring_get_request(rinfo, req, &final_ring_req);
> +     ring_req = &rinfo->shadow[id].req;
>  
>       ring_req->operation = BLKIF_OP_DISCARD;
>       ring_req->u.discard.nr_sectors = blk_rq_sectors(req);
> @@ -569,8 +570,8 @@ static int blkif_queue_discard_req(struct request *req, struct blkfront_ring_inf
>       else
>               ring_req->u.discard.flag = 0;
>  
> -     /* Keep a private copy so we can reissue requests when recovering. */
> -     rinfo->shadow[id].req = *ring_req;
> +     /* Copy the request to the ring page. */
> +     *final_ring_req = *ring_req;
>  
>       return 0;
>  }
> @@ -703,6 +704,7 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
>  {
>       struct blkfront_info *info = rinfo->dev_info;
>       struct blkif_request *ring_req, *extra_ring_req = NULL;
> +     struct blkif_request *final_ring_req, *final_extra_ring_req = NULL;
>       unsigned long id, extra_id = NO_ASSOCIATED_ID;
>       bool require_extra_req = false;
>       int i;
> @@ -747,7 +749,8 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
>       }
>  
>       /* Fill out a communications ring structure. */
> -     id = blkif_ring_get_request(rinfo, req, &ring_req);
> +     id = blkif_ring_get_request(rinfo, req, &final_ring_req);
> +     ring_req = &rinfo->shadow[id].req;
>  
>       num_sg = blk_rq_map_sg(req->q, req, rinfo->shadow[id].sg);
>       num_grant = 0;
> @@ -798,7 +801,9 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
>               ring_req->u.rw.nr_segments = num_grant;
>               if (unlikely(require_extra_req)) {
>                       extra_id = blkif_ring_get_request(rinfo, req,
> -                                                       &extra_ring_req);
> +                                                       &final_extra_ring_req);
> +                     extra_ring_req = &rinfo->shadow[extra_id].req;

I'm slightly confused about this extra request stuff because I cannot
find any check that asserts we have two empty slots on the ring before
getting here (I only see a RING_FULL check in blkif_queue_rq).

This is AFAIK only used on Arm when guest page size > 4KB.
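To make that concrete, this is the kind of check I would have expected
somewhere before building the request (sketch only, using the generic
macros from include/xen/interface/io/ring.h; not a proposed change, and
the exact return convention is an assumption):

    /* blkif_queue_rq() only checks RING_FULL(), i.e. "at least one free
     * slot"; something like the below would be needed to also guarantee
     * room for the extra request.
     */
    unsigned int needed = require_extra_req ? 2 : 1;

    if (RING_FREE_REQUESTS(&rinfo->ring) < needed)
            return 1;   /* non-zero so the caller backs off and retries */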

Roger.