
Re: [Xen-devel] 4.2 TODO / Release Status



On Tue, 2012-05-08 at 10:34 +0100, Ian Campbell wrote:
> Plan for a 4.2 release:
> http://lists.xen.org/archives/html/xen-devel/2012-03/msg00793.html
> 
> The time line is as follows:
> 
> 19 March        -- TODO list locked down
> 2 April         -- Feature Freeze
>                                                 << WE ARE HERE
> Mid/Late May    -- First release candidate
> Weekly          -- RCN+1 until it is ready

I think the critical path here is the libxl stable API stuff.

Within that, I think the main elements are IanJ's series to handle fork
etc. and various other bits, followed by Roger's hotplug series (which
depends on IanJ's).

I'm a bit unsure which bits of the following list are done, in progress
(part of a posted or unposted series) or yet to be started. Could you
guys have a quick look through and let me know?

Also, is it worth trying to find some other heads to take over some of
the smaller, non-dependent side issues to clear your path (IanJ's in
particular)? I've also appended a few rough sketches below the quoted
list to illustrate some of the patterns under discussion (the
fork/SIGCHLD issue, the ao_how calling convention, and the
helper-process approach to save/restore).
 
> tools, blockers:
>       * libxl stable API -- we would like 4.2 to define a stable API
>         which downstreams can start to rely on not changing. Aspects of
>         this are:
>               * Safe fork vs. fd handling hooks. Involves API changes
>                 (Ian J, patches posted)
>               * libxl_wait_for_free_memory/libxl_wait_for_memory_target.
>                 Interface needs an overhaul, related to
>                 locking/serialization over domain create (deferred to
>                 4.3). IanJ to add note about this interface being
>                 substandard but otherwise defer to 4.3.
>               * libxl_*_path. Majority made internal, only configdir and
>                 lockdir remain public (used by xl). Good enough?
>               * Interfaces which may need to be async:
>                       * libxl_domain_suspend. Probably need to move
>                         xc_domain_save into a separate process, as per
>                         <20366.40183.410388.447630@xxxxxxxxxxxxxxxxxxxxxxxx>. 
>                         Likely need to do the same for xc_domain_restore
>                         too. I'm not sure if IanJ is working on this (or
>                         planning to work on it).
>                       * libxl_domain_create_{new,restore} -- IanJ has
>                         patches as part of event series.
>                       * libxl_domain_core_dump. Can take a dummy ao_how
>                         and remain synchronous internally. (IanC, patch
>                         posted)
>                       * libxl_device_{disk,nic,vkb,vfb}_add (and
>                         remove?). Roger Pau Monné fixes disk as part of
>                         hotplug script series and adds infrastructure
>                         which should make the others trivial. (Roger
>                         investigating)
>                       * libxl_cdrom_insert. Should be easy once
>                         disk_{add,remove} are done. IanJ to look at (or
>                         is already doing so?).
>                       * libxl_device_disk_local_{attach,detach}. Become
>                         internal as part of Stefano's domain 0 disk
>                         attach series (patches posted)
>                       * libxl_device_pci_{add,remove}. Roger
>                         investigating along with other add,remove
>                         functions.
>                       * libxl_xen_tmem_*. All functions are "fast" and
>                         therefore no async needed. Exception is
>                         tmem_destroy which Dan Magenheimer says is
>                         obsolete and can just be removed. (Ian C, patch
>                         to remove tmem_destroy included, DONE)
>                       * libxl_fork -- IanJ's event series will remove
>                         all users of this.
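
To illustrate the fork/SIGCHLD issue IanJ's hooks are aimed at, here is
a tiny sketch (purely illustrative, not libxl code): if libxl and the
calling application both fork children, a naive wait loop in the
application can steal the exit status of a helper libxl forked
internally, which is why the two need an explicit agreement about who
owns SIGCHLD/waitpid.

  /* Illustrative only -- not libxl code. */
  #include <sys/types.h>
  #include <sys/wait.h>

  static void app_reap_children(void)
  {
      int status;
      pid_t pid;

      /* waitpid(-1, ...) reaps *any* child of this process, including
       * helpers forked inside libxl, so libxl never sees the exit
       * status it is waiting for.  The proposed hooks let the
       * application and libxl agree on how children are reaped. */
      while ((pid = waitpid(-1, &status, WNOHANG)) > 0) {
          /* ... handle the application's own children here ... */
      }
  }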
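
For the "interfaces which may need to be async" entries, the
caller-side shape of the ao_how convention looks roughly like the
following. This is only a sketch: the libxl_asyncop_how field names and
the libxl_domain_core_dump prototype shown here are my approximation of
the proposed API, not the final thing.

  /* Sketch of the asynchronous-operation ("ao_how") calling
   * convention; names and prototypes are approximate. */
  #include <stdint.h>
  #include <stdio.h>
  #include <libxl.h>

  static void dump_done(libxl_ctx *ctx, int rc, void *for_callback)
  {
      /* Invoked from the application's event loop once the operation
       * has actually completed. */
      fprintf(stderr, "core dump finished, rc=%d\n", rc);
  }

  static int start_dump(libxl_ctx *ctx, uint32_t domid)
  {
      libxl_asyncop_how how = {
          .callback = dump_done,      /* completion notification */
          .u.for_callback = NULL,     /* opaque cookie for the callback */
      };

      /* Passing &how requests asynchronous completion; passing NULL
       * keeps the old blocking behaviour.  For core_dump the
       * implementation can accept the ao_how but still complete
       * synchronously -- the "dummy ao_how" approach above.  The dump
       * path is just an example. */
      return libxl_domain_core_dump(ctx, domid,
                                    "/var/lib/xen/dump/guest", &how);
  }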
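
And for libxl_domain_suspend, the "separate process" idea is
essentially: fork a helper which makes the long-blocking xc_domain_save
call and report completion back over a pipe (or SIGCHLD), so that the
rest of libxl stays responsive. Very rough sketch; save_domain_blocking()
is a hypothetical stand-in for the xc_domain_save call:

  /* Conceptual sketch only -- not the actual series. */
  #include <stdint.h>
  #include <unistd.h>
  #include <sys/types.h>

  /* Hypothetical wrapper around xc_domain_save(). */
  extern int save_domain_blocking(uint32_t domid, int save_fd);

  static pid_t start_save_helper(uint32_t domid, int save_fd,
                                 int *status_fd_out)
  {
      int pipefd[2];
      pid_t pid;

      if (pipe(pipefd) < 0) return -1;

      pid = fork();
      if (pid < 0) {
          close(pipefd[0]);
          close(pipefd[1]);
          return -1;
      }
      if (pid == 0) {
          /* Child: do the blocking save, then report the result. */
          int rc = save_domain_blocking(domid, save_fd);
          close(pipefd[0]);
          (void)write(pipefd[1], &rc, sizeof(rc));
          _exit(rc ? 1 : 0);
      }

      /* Parent: hand the read end to the event machinery; a readable
       * fd (or SIGCHLD) means the save has finished. */
      close(pipefd[1]);
      *status_fd_out = pipefd[0];
      return pid;
  }

The same shape would presumably apply to xc_domain_restore.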




 

