[Xen-changelog] [qemu-upstream-unstable] qemu/xendisk: set maximum number of grants to be used
commit f7f8c33cd49885d69efc2e5f7f9a613d631762e2
Author: Jan Beulich <JBeulich@xxxxxxxx>
Date:   Wed Jun 13 10:45:07 2012 +0000

    qemu/xendisk: set maximum number of grants to be used

    Legacy (non-pvops) gntdev drivers may require this to be done when the
    number of grants intended to be used simultaneously exceeds a certain
    driver specific default limit.

    upstream-commit: 64c27e5b1fdb6d94bdc0bda3b1869d7383a35c65

    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Signed-off-by: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
---
 hw/xen_disk.c |   14 ++++++++++++++
 1 files changed, 14 insertions(+), 0 deletions(-)

diff --git a/hw/xen_disk.c b/hw/xen_disk.c
index a76cd73..88544b1 100644
--- a/hw/xen_disk.c
+++ b/hw/xen_disk.c
@@ -534,6 +534,15 @@ static void blk_bh(void *opaque)
     blk_handle_requests(blkdev);
 }
 
+/*
+ * We need to account for the grant allocations requiring contiguous
+ * chunks; the worst case number would be
+ *     max_req * max_seg + (max_req - 1) * (max_seg - 1) + 1,
+ * but in order to keep things simple just use
+ *     2 * max_req * max_seg.
+ */
+#define MAX_GRANTS(max_req, max_seg) (2 * (max_req) * (max_seg))
+
 static void blk_alloc(struct XenDevice *xendev)
 {
     struct XenBlkDev *blkdev = container_of(xendev, struct XenBlkDev, xendev);
@@ -545,6 +554,11 @@ static void blk_alloc(struct XenDevice *xendev)
     if (xen_mode != XEN_EMULATE) {
         batch_maps = 1;
     }
+    if (xc_gnttab_set_max_grants(xendev->gnttabdev,
+            MAX_GRANTS(max_requests, BLKIF_MAX_SEGMENTS_PER_REQUEST)) < 0) {
+        xen_be_printf(xendev, 0, "xc_gnttab_set_max_grants failed: %s\n",
+                      strerror(errno));
+    }
 }
 
 static int blk_init(struct XenDevice *xendev)
--
generated by git-patchbot for /home/xen/git/qemu-upstream-unstable.git

_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxx
http://lists.xensource.com/xen-changelog
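
As a quick sanity check on the MAX_GRANTS bound in the first hunk: assuming
the usual values of the time, max_requests = 32 (the default in hw/xen_disk.c)
and BLKIF_MAX_SEGMENTS_PER_REQUEST = 11 (from the blkif protocol header),
neither of which is stated in the patch itself, the exact worst case is
32 * 11 + 31 * 10 + 1 = 663 grants, while the simplified bound reserves
2 * 32 * 11 = 704. A minimal standalone C sketch of that arithmetic:

/*
 * Standalone sketch, not part of the patch: checks that the simplified
 * bound 2 * max_req * max_seg covers the exact worst case
 * max_req * max_seg + (max_req - 1) * (max_seg - 1) + 1.
 * The values 32 and 11 are assumptions (the typical max_requests default
 * and BLKIF_MAX_SEGMENTS_PER_REQUEST), not taken from this patch.
 */
#include <assert.h>
#include <stdio.h>

#define MAX_GRANTS(max_req, max_seg) (2 * (max_req) * (max_seg))

int main(void)
{
    int max_req = 32;   /* assumed max_requests default */
    int max_seg = 11;   /* assumed BLKIF_MAX_SEGMENTS_PER_REQUEST */
    int exact = max_req * max_seg + (max_req - 1) * (max_seg - 1) + 1;

    assert(MAX_GRANTS(max_req, max_seg) >= exact);
    printf("exact worst case: %d, simplified bound: %d\n",
           exact, MAX_GRANTS(max_req, max_seg));    /* 663 vs. 704 */
    return 0;
}

The roughly 40 grants of slack are the price of the simpler expression;
since the value passed to xc_gnttab_set_max_grants only needs to be an
upper bound, over-reserving is harmless, whereas a bound that is too small
would presumably make grant mappings fail once enough requests are in
flight.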