
Re: [Xen-devel] Xen 4.2 TODO / Release Plan



On Tue, May 29, Jan Beulich wrote:

> >>> On 29.05.12 at 11:32, Ian Campbell <Ian.Campbell@xxxxxxxxxx> wrote:
> > On Mon, 2012-05-14 at 12:14 +0100, Jan Beulich wrote:
> >> 
> >> >>> On 14.05.12 at 12:26, Ian Campbell <Ian.Campbell@xxxxxxxxxx>
> >> wrote:
> >> > tools, blockers:
> >> 
> >> Adjustments needed for qdisk backend to work on non-pvops Linux.
> > 
> > Can you remind me what those are please.
> 
> "qemu/xendisk: set maximum number of grants to be used"
> (http://lists.xen.org/archives/html/xen-devel/2012-05/msg00715.html).
> 
> Unfortunately I didn't hear back from Olaf regarding the updated
> value that the supposed v2 of the patch would use (see the thread),
> which is at least partly due to him having further problems with the
> qdisk backend. Olaf - did you ever see gntdev allocation failures
> again after switching to the higher value?

I just did a successful installation of a sles11sp2 guest on a
xen-unstable host with changeset 25427:ad348c6575b8 and with the change
below, and a second attempt I just started seems to get through as well.

I'm fairly sure I already used this variant two weeks ago, and the
install still failed back then. Perhaps other changes made during the
last two weeks make the difference.

I will also double-check how it goes without this change.

Olaf

--- a/hw/xen_disk.c
+++ b/hw/xen_disk.c
@@ -536,6 +536,10 @@ static void blk_alloc(struct XenDevice *
     if (xen_mode != XEN_EMULATE) {
         batch_maps = 1;
     }
+    if (xc_gnttab_set_max_grants(xendev->gnttabdev,
+               2 * max_requests * BLKIF_MAX_SEGMENTS_PER_REQUEST + 1) < 0)
+        xen_be_printf(xendev, 0, "xc_gnttab_set_max_grants failed: %s\n",
+                      strerror(errno));
 }

 static int blk_init(struct XenDevice *xendev)
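
For reference, a back-of-the-envelope check of the value passed above.
This is only a sketch: it assumes qemu's then-current default of
max_requests = 32 in hw/xen_disk.c and BLKIF_MAX_SEGMENTS_PER_REQUEST = 11
from the blkif protocol headers, and my reading that the factor of two is
headroom for grants still held while the next batch is being mapped, with
the +1 covering the grant for the shared ring page itself.

#include <stdio.h>

int main(void)
{
    const int max_requests = 32;            /* assumed hw/xen_disk.c default */
    const int max_segs_per_req = 11;        /* BLKIF_MAX_SEGMENTS_PER_REQUEST */

    /* One grant per segment of every in-flight request, doubled for
     * headroom, plus one grant for the shared ring page. */
    int max_grants = 2 * max_requests * max_segs_per_req + 1;

    printf("grant cap per qdisk instance: %d\n", max_grants);  /* 705 */
    return 0;
}

So with those defaults the backend asks gntdev for at most 705 grant
mappings per disk instead of relying on the kernel's built-in default,
which on non-pvops kernels was too small for the qdisk backend.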


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

