
[Xen-changelog] [qemu-xen master] qcow2: Inform block layer about discard boundaries



commit ecdbead659f037dc572bba9eb1cd31a5a1a9ad9a
Author:     Eric Blake <eblake@xxxxxxxxxx>
AuthorDate: Thu Nov 17 14:13:55 2016 -0600
Commit:     Kevin Wolf <kwolf@xxxxxxxxxx>
CommitDate: Tue Nov 22 15:59:22 2016 +0100

    qcow2: Inform block layer about discard boundaries
    
    At the qcow2 layer, discard is only possible on a per-cluster
    basis; at the moment, qcow2 silently rounds any unaligned
    requests to this granularity.  However, an upcoming patch will
    fix a regression in which the block layer ignores too much of an
    unaligned discard request, by teaching the block layer to break
    discard requests up at alignment boundaries; for that to work,
    the block layer must know about our limits.
    
    However, we can't go one step further by changing
    qcow2_discard_clusters() to assert that requests are always
    aligned, since that helper function is reached on paths
    outside of the block layer.
    
    CC: qemu-stable@xxxxxxxxxx
    Signed-off-by: Eric Blake <eblake@xxxxxxxxxx>
    Reviewed-by: Max Reitz <mreitz@xxxxxxxxxx>
    Signed-off-by: Kevin Wolf <kwolf@xxxxxxxxxx>
---
 block/qcow2.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/block/qcow2.c b/block/qcow2.c
index 6d5689a..e22f6dc 100644
--- a/block/qcow2.c
+++ b/block/qcow2.c
@@ -1206,6 +1206,7 @@ static void qcow2_refresh_limits(BlockDriverState *bs, Error **errp)
         bs->bl.request_alignment = BDRV_SECTOR_SIZE;
     }
     bs->bl.pwrite_zeroes_alignment = s->cluster_size;
+    bs->bl.pdiscard_alignment = s->cluster_size;
 }
 
 static int qcow2_set_key(BlockDriverState *bs, const char *key)
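
As a purely illustrative aside (not part of the patch above), the standalone
C sketch below shows the kind of splitting the commit message describes: once
a driver advertises a discard alignment, an unaligned request can be broken
into an unaligned head, an aligned middle that actually gets discarded, and
an unaligned tail.  The helper names and the 64 KiB cluster size are invented
for this example; QEMU's real logic lives in the generic block layer, not here.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical stand-in for bs->bl.pdiscard_alignment; 64 KiB is the
     * qcow2 default cluster size, but any power of two would do here. */
    #define DISCARD_ALIGNMENT ((uint64_t)64 * 1024)

    /* Placeholder for the driver-level discard of an already-aligned range. */
    static void discard_aligned(uint64_t offset, uint64_t bytes)
    {
        printf("discard %" PRIu64 " bytes at offset %" PRIu64 "\n",
               bytes, offset);
    }

    /* Split an arbitrary request at alignment boundaries so the driver only
     * sees ranges it can act on; the unaligned head and tail fragments are
     * simply left alone in this sketch. */
    static void discard_split(uint64_t offset, uint64_t bytes)
    {
        uint64_t end  = offset + bytes;
        uint64_t head = (offset + DISCARD_ALIGNMENT - 1) & ~(DISCARD_ALIGNMENT - 1);
        uint64_t tail = end & ~(DISCARD_ALIGNMENT - 1);

        if (head < tail) {
            discard_aligned(head, tail - head);
        }
        /* [offset, head) and [tail, end) are the fragments that fall outside
         * the advertised alignment. */
    }

    int main(void)
    {
        /* Starts 4 KiB into a cluster and ends 8 KiB into another one. */
        discard_split(4096, 2 * DISCARD_ALIGNMENT + 8192);
        return 0;
    }

Exporting s->cluster_size through bs->bl.pdiscard_alignment, as the one-line
change above does, is what lets the block layer perform this kind of split on
qcow2's behalf instead of the driver silently rounding the request itself.
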
--
generated by git-patchbot for /home/xen/git/qemu-xen.git#master
