[Xen-users] Xen 4.2 TODO / Release Plan

Plan for a 4.2 release:

The timeline is as follows:

19 March        -- TODO list locked down
2 April         -- Feature Freeze
30 July         -- First release candidate
Weekly          -- RCN+1 until release          << WE ARE HERE

A handful of issues identified by the test day last week are included,
thanks to all who took part.

The updated TODO list follows.

hypervisor, blockers:

    * None

tools, blockers:

    * libxl stable API -- we would like 4.2 to define a stable API
      which downstreams can start to rely on not changing. Aspects of
      this are:

        * None known

    * xl compatibility with xm:

        * No known issues

    * [CHECK] More formally deprecate xm/xend. Manpage patches already
      in tree. Needs a release note and communication around -rc1 to
      remind people to test xl.

    * [CHECK] Confirm that migration from Xen 4.1 -> 4.2 works.

    * Bump library SONAMES as necessary.

    * [BUG] qemu-traditional has 50% cpu utilization on an idle
      Windows system if USB is enabled. Not 100% clear whether this
      is a Xen or a qemu issue. George Dunlap is performing initial
      analysis.

hypervisor, nice to have:

    * [BUG(?)] Under certain conditions, the p2m_pod_sweep code will
      stop halfway through searching, causing a guest to crash even if
      there was zeroed memory available.  This is NOT a regression
      from 4.1, and is a very rare case, so probably shouldn't be a
      blocker.  (In fact, I'd be open to the idea that it should wait
      until after the release to get more testing.)
            (George Dunlap)

    * S3 regression(s?) reported by Ben Guthro (Ben & Jan Beulich)

    * fix high change rate to CMOS RTC periodic interrupt causing
      guest wall clock time to lag (possible fix outlined, needs to be
      put in patch form and thoroughly reviewed/tested for unwanted
      side effects, Jan Beulich)

tools, nice to have:

    * xl compatibility with xm:

        * the parameters io and irq in domU config files are not
          evaluated by xl, so it is not possible to pass a parallel
          port (e.g. for a printer) through to a domU started with
          xl. (Reported by Dieter Bloms.)
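
      For reference, an xm-style config fragment of the kind that
      xend evaluated but xl currently ignores looks roughly like
      this (the values are illustrative for a typical parallel
      port, not taken from the report):

```
# xm-style domU config fragment (honoured by xend, ignored by xl):
io  = [ "0x378-0x37f" ]   # I/O port range of the parallel port
irq = [ 7 ]               # IRQ line used by the parallel port
```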

    * xl.cfg(5) documentation patch for qemu-upstream
      videoram/videomem support:
      qemu-upstream doesn't support specifying videomem size for the
      HVM guest cirrus/stdvga (but this works with
      qemu-xen-traditional). (Pasi Kärkkäinen)

    * [BUG] long stop during the guest boot process with qcow image,
      reported by Intel: http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1821

    * [BUG] vcpu-set doesn't take effect on guest, reported by Intel.

    * Load blktap driver from xencommons initscript if available, thread at:
      <db614e92faf743e20b3f.1337096977@kodo2>. To be fixed more
      properly in 4.3. (Patch posted, discussion, plan to take simple
      xencommons patch for 4.2 and revisit for 4.3. Ping sent)
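
      The simple xencommons change can be sketched as follows
      (module names assumed; the actual posted patch may differ):

```shell
# Sketch of the xencommons initscript hunk (assumed form, not the
# actual patch): load a blktap control module if one is available,
# and carry on quietly when none is present.
modprobe blktap2 2>/dev/null || modprobe blktap 2>/dev/null || true
```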

    * [BUG] xl allows same PCI device to be assigned to multiple
      guests. http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1826
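
      Until xl itself checks for this, a host-side sanity check
      along these lines could catch the problem (hypothetical
      helper, not part of any Xen tool):

```python
# Hypothetical helper, not part of xl: given a mapping of guest name
# to the list of PCI BDF strings from its config, report any device
# that more than one guest claims.
def find_shared_pci(assignments):
    seen = {}     # BDF -> first guest that claimed it
    shared = []   # (BDF, first claimant, later claimant)
    for guest, devices in assignments.items():
        for bdf in devices:
            if bdf in seen:
                shared.append((bdf, seen[bdf], guest))
            else:
                seen[bdf] = guest
    return shared

# Example: 0000:03:00.0 appears in two guest configs.
configs = {
    "guest1": ["0000:03:00.0"],
    "guest2": ["0000:03:00.0", "0000:04:00.0"],
}
# find_shared_pci(configs) -> [("0000:03:00.0", "guest1", "guest2")]
```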

    * address PoD problems with early host side accesses to guest
      address space (Jan Beulich, DONE)

    * fix ipxe build problems with gcc 4.7 (fedora 17).
      The following files fail to build:
        - ipxe/src/drivers/bus/isa.c
        - ipxe/src/drivers/net/myri10ge.c
        - ipxe/src/drivers/infiniband/qib7322.c
      Patches have been posted to the ipxe-devel mailing list, so we
      need to update our ipxe version or grab the patches. (DONE, Keir)

    * "xl list -l" does not produce proper json. Should be possible to
      make it into an array. Reported by Bastian Blank,
      <20120814121741.GA10214@xxxxxxxxxxxxxxxxxxxxxxx>. (Ian Campbell,
      patch posted)
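
      To illustrate the problem (with made-up sample strings, not
      actual xl output): several JSON objects concatenated back to
      back are not one valid JSON document, whereas the same
      objects wrapped in an array are:

```python
import json

# Two JSON objects concatenated, roughly the shape "xl list -l"
# currently emits (sample strings are illustrative, not real output):
concatenated = '{ "domid": 0 }\n{ "domid": 1 }'
try:
    json.loads(concatenated)
    parses = True
except ValueError:
    parses = False
# parses is False: a JSON document must be a single value.

# The same objects as one array parse cleanly:
domains = json.loads('[ { "domid": 0 }, { "domid": 1 } ]')
# domains[1]["domid"] is 1
```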

    * "xl cpupool-create" segfaults on incorrect input. Reported by
      George Dunlap. (Ian Campbell, patch posted)

