
Re: [Xen-devel] [PATCH 3/3] Avoid repeated memory allocation in xen_disk



> -----Original Message-----
> From: Tim Smith [mailto:tim.smith@xxxxxxxxxx]
> Sent: 02 November 2018 10:01
> To: xen-devel@xxxxxxxxxxxxxxxxxxxx; qemu-devel@xxxxxxxxxx; qemu-block@xxxxxxxxxx
> Cc: Anthony Perard <anthony.perard@xxxxxxxxxx>; Kevin Wolf
> <kwolf@xxxxxxxxxx>; Paul Durrant <Paul.Durrant@xxxxxxxxxx>; Stefano
> Stabellini <sstabellini@xxxxxxxxxx>; Max Reitz <mreitz@xxxxxxxxxx>
> Subject: [PATCH 3/3] Avoid repeated memory allocation in xen_disk
> 
> xen_disk currently allocates memory to hold the data for each ioreq
> as that ioreq is used, and frees it afterwards. Because these buffers
> must be page-aligned, the allocations interact poorly with other,
> non-page-aligned allocations and balloon the heap.
> 
> Instead, allocate the maximum possible requirement, which is
> BLKIF_MAX_SEGMENTS_PER_REQUEST pages (currently 11), when the ioreq
> is created, and keep that allocation until the ioreq is destroyed.
> Since the ioreqs themselves are re-used via a free list, this should
> actually improve memory usage.
> 
> Signed-off-by: Tim Smith <tim.smith@xxxxxxxxxx>

Reviewed-by: Paul Durrant <paul.durrant@xxxxxxxxxx>

> ---
>  hw/block/xen_disk.c |   11 ++++++-----
>  1 file changed, 6 insertions(+), 5 deletions(-)
> 
> diff --git a/hw/block/xen_disk.c b/hw/block/xen_disk.c
> index b506e23868..faaeefba29 100644
> --- a/hw/block/xen_disk.c
> +++ b/hw/block/xen_disk.c
> @@ -112,7 +112,6 @@ static void ioreq_reset(struct ioreq *ioreq)
>      memset(&ioreq->req, 0, sizeof(ioreq->req));
>      ioreq->status = 0;
>      ioreq->start = 0;
> -    ioreq->buf = NULL;
>      ioreq->size = 0;
>      ioreq->presync = 0;
> 
> @@ -137,6 +136,11 @@ static struct ioreq *ioreq_start(struct XenBlkDev *blkdev)
>          /* allocate new struct */
>          ioreq = g_malloc0(sizeof(*ioreq));
>          ioreq->blkdev = blkdev;
> +        /* We cannot need more pages per ioreq than this, and we do re-use
> +         * ioreqs, so allocate the memory once here, to be freed in
> +         * blk_free() when the ioreq is freed. */
> +        ioreq->buf = qemu_memalign(XC_PAGE_SIZE, BLKIF_MAX_SEGMENTS_PER_REQUEST
> +                                   * XC_PAGE_SIZE);
>          blkdev->requests_total++;
>          qemu_iovec_init(&ioreq->v, 1);
>      } else {
> @@ -313,14 +317,12 @@ static void qemu_aio_complete(void *opaque, int ret)
>          if (ret == 0) {
>              ioreq_grant_copy(ioreq);
>          }
> -        qemu_vfree(ioreq->buf);
>          break;
>      case BLKIF_OP_WRITE:
>      case BLKIF_OP_FLUSH_DISKCACHE:
>          if (!ioreq->req.nr_segments) {
>              break;
>          }
> -        qemu_vfree(ioreq->buf);
>          break;
>      default:
>          break;
> @@ -392,12 +394,10 @@ static int ioreq_runio_qemu_aio(struct ioreq *ioreq)
>  {
>      struct XenBlkDev *blkdev = ioreq->blkdev;
> 
> -    ioreq->buf = qemu_memalign(XC_PAGE_SIZE, ioreq->size);
>      if (ioreq->req.nr_segments &&
>          (ioreq->req.operation == BLKIF_OP_WRITE ||
>           ioreq->req.operation == BLKIF_OP_FLUSH_DISKCACHE) &&
>          ioreq_grant_copy(ioreq)) {
> -        qemu_vfree(ioreq->buf);
>          goto err;
>      }
> 
> @@ -990,6 +990,7 @@ static int blk_free(struct XenDevice *xendev)
>          ioreq = QLIST_FIRST(&blkdev->freelist);
>          QLIST_REMOVE(ioreq, list);
>          qemu_iovec_destroy(&ioreq->v);
> +        qemu_vfree(ioreq->buf);
>          g_free(ioreq);
>      }
> 
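
For anyone who wants the idea in isolation: the pattern the patch moves to is
"one worst-case, page-aligned buffer per request object, allocated when the
object is first created and freed only when the free list is torn down". The
standalone sketch below illustrates that pattern; the struct name, function
names and the PAGE_SZ/MAX_SEGS constants are illustrative stand-ins, not the
actual xen_disk/QEMU code.

#define _POSIX_C_SOURCE 200112L
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SZ   4096   /* stand-in for XC_PAGE_SIZE */
#define MAX_SEGS  11     /* stand-in for BLKIF_MAX_SEGMENTS_PER_REQUEST */

struct sketch_ioreq {
    void *buf;                  /* worst-case data buffer, reused across requests */
    struct sketch_ioreq *next;  /* free-list link */
};

static struct sketch_ioreq *freelist;

/* Get a request: reuse one from the free list, or allocate a new one
 * together with its maximum-size, page-aligned data buffer. */
static struct sketch_ioreq *sketch_ioreq_start(void)
{
    struct sketch_ioreq *ioreq = freelist;

    if (ioreq) {
        freelist = ioreq->next;
        return ioreq;               /* buffer is already attached */
    }

    ioreq = calloc(1, sizeof(*ioreq));
    if (!ioreq || posix_memalign(&ioreq->buf, PAGE_SZ, MAX_SEGS * PAGE_SZ)) {
        free(ioreq);
        return NULL;
    }
    return ioreq;
}

/* Finish a request: put it back on the free list; the buffer stays attached. */
static void sketch_ioreq_release(struct sketch_ioreq *ioreq)
{
    ioreq->next = freelist;
    freelist = ioreq;
}

/* Tear-down path (the blk_free() analogue): only here are buffers freed. */
static void sketch_free_all(void)
{
    while (freelist) {
        struct sketch_ioreq *ioreq = freelist;
        freelist = ioreq->next;
        free(ioreq->buf);
        free(ioreq);
    }
}

int main(void)
{
    struct sketch_ioreq *a = sketch_ioreq_start();
    if (!a) {
        return 1;
    }
    sketch_ioreq_release(a);            /* recycled, not freed */
    struct sketch_ioreq *b = sketch_ioreq_start();
    printf("request object reused: %s\n", a == b ? "yes" : "no");
    sketch_ioreq_release(b);
    sketch_free_all();
    return 0;
}

The point of the pattern is that steady-state operation does no heap traffic
at all for data buffers, and every buffer allocation is page-sized and
page-aligned, so it cannot fragment against odd-sized allocations elsewhere.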

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

